Stress Testing for Media Bias

I was recently reminded (H/T pHil R) of Michael Crichton’s insight into our vulnerability to media bias.  He called it the Gell-Mann Amnesia Effect, named after his friend, the physicist Murray Gell-Mann.

“Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray’s case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the “wet streets cause rain” stories. Paper’s full of them.

In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.”

Howard Wetsman MD takes it from there in his article A New Corollary to the Gell-Mann Amnesia Effect, suggesting how to approach media reports with critical intelligence. Excerpts in italics with my bolds.

The corollary came to me the other day when I was reading an email string on Addiction Medicine. A couple of fathers of the field had written an article in one of those non-peer reviewed clinical newspapers that each specialty has and shared it with the group. They were showered with praise, so I started reading what they wrote. I was struck that the assumptions they made in their article directly contradicted several of the working assumptions of the group, yet the group expressed nearly universal approval with the conclusions of the article.

So the Hunt Assumption Amnesia Corollary is when experts start reading a paper, note that they disagree with some basic assumptions of the work, but keep reading and accept the conclusions, forgetting they had rejected the assumptions. This effect is rife in Addiction Medicine, and, I suspect, much of academia.

When I first learned to read a scientific paper, I was taught to go through the various sections to understand the limitations of the conclusions I’d read at the end. Did they select the subjects correctly? Did they use the right test for the question? Did they have enough subjects to power the study sufficiently? And many other important questions.

But I’ve come to find in the fullness of time that there are really only two questions I need to know when reading a paper. Were the authors aware that their assumptions are assumptions, and are they questioning them?

I want to pose my own testable hypothesis about how this corollary effect occurs. I think, if I’m right, that we’ll see it in all media.

First, the assumption is stated as fact, but in a muted way so that it slides past the reader’s assumption filter rather than slamming headlong into it. Then data is piled up to bolster the writer’s thesis by generalizing findings in particular situations to all situations. So, by the end of the piece, any disagreement with the assumption is forgotten under the weight of “the evidence.”

A previous post is reprinted below showing how a journalism professor prepares his students to read critically media reports concerning climate change/global warming.

Decoding Climate News

Journalism professor David Blackall provides a professional context for investigative reporting I’ve been doing on this blog, along with other bloggers interested in science and climate change/global warming. His peer reviewed paper is Environmental Reporting in a Post Truth World. The excerpts below show his advice is good not only for journalists but for readers.  h/t GWPF, Pierre Gosselin

Overview: The Grand Transnational Narrative

The dominance of a ‘grand transnational narrative’ in environmental discourse (Mittal, 2012) over other human impacts, like deforestation, is problematic and is partly due to the complexities and overspecialization of climate modelling. A strategy for learning, therefore, is to instead focus on the news media: it is easily researched and it tends to act ‘as one driving force’, providing citizens with ‘piecemeal information’, making it impossible to arrive at an informed position about science, society and politics (Marisa Dispensa et al., 2003). After locating problematic news narratives, Google Scholar can then be employed to locate recent scientific papers that examine, verify or refute news media discourse.

The science publication Nature Climate Change this year published a study demonstrating that Earth this century warmed substantially less than computer-generated climate models predict.

Unfortunately for public knowledge, such findings don’t appear in the news. Sea levels too have not been obeying the ‘grand transnational narrative’ of catastrophic global warming. Sea levels around Australia 2011–2012 were measured with the most significant drops in sea levels since measurements began. . .The 2015–2016 El-Niño, a natural phenomenon, drove sea levels around Indonesia to low levels such that coral reefs were bleaching. The echo chamber of news repeatedly fails to report such phenomena and yet many studies continue to contradict mainstream news discourse.

I will be arguing that a number of narratives need correction, and while I accept that the views I am about to express are not universally held, I believe that the scientific evidence does support them.

The Global Warming/Climate Change Narrative

The primary narrative in need of correction is that global warming alone (Lewis, 2016), which induces climate change (climate disruption), is due to the increase in global surface temperatures caused by atmospheric greenhouse gases. Instead, there are many factors arising from human land use (Pielke et al., 2016), which it could be argued are responsible for climate change, and some of these practices can be mitigated through direct public action.

Global warming is calculated by measuring average surface temperatures over time. While it is easy to argue that temperatures are increasing, it cannot be argued, as some models contend, that the increases are uniform throughout the global surface and atmosphere. Climate science is further problematized by its own scientists, in that computer modelling, as one component of this multi-faceted science, is privileged over other disciplines, like geology.

Scientific uncertainty arises from ‘simulations’ of climate because computer models are failing to match the actual climate. This means that computer models are unreliable in making predictions.

Published in the eminent journal Nature (Ma et al., 2017), ‘Theory of chaotic orbital variations confirmed by Cretaceous geological evidence’ provides excellent stimulus material for student news writing. The paper discusses the severe wobbles in planetary orbits, and these affect climate. The wobbles are reflected in geological records and show that the theoretical climate models are not rigorously confirmed by these radioisotopically calibrated and anchored geological data sets. Yet popular discourse presents Earth as harmonious: temperatures, sea levels and orbital patterns all naturally balanced until global warming affects them, a mythical construct. Instead, the reality is natural variability, the interactions of which are yet to be measured or discovered (Berger, 2013).

In such a (media) climate, it is difficult for the assertion to be made that there might be other sources, than a nontoxic greenhouse gas called carbon dioxide (CO2), that could be responsible for ‘climate disruption’. A healthy scientific process would allow such a proposition. Contrary to warming theory, CO2 levels have increased, but global average temperatures remain steady. The global average temperature increased from 1983 to 1998; then, it flat-lined for nearly 20 years. James Hansen’s Hockey Stick graph, with soaring and catastrophic temperatures, simply did not materialize.

Keenan et al. (2016), using global carbon budget estimates, ground, atmospheric and satellite observations, and multiple global vegetation models, found that there is also now a pause in the growth rate of atmospheric CO2. They attribute this to increases in terrestrial sinks over the last decade, where forests consume the rising atmospheric CO2 and rapidly grow—the net effect being a slowing in the rate of warming from global respiration.

Contrary to public understanding, higher temperatures in cities are due to a phenomenon known as the ‘urban heat effect’ (Taha, 1997; Yuan & Bauer, 2007). Engines, air conditioners, heaters and heat absorbing surfaces like bitumen radiate heat energy in urban areas, but this is not due to the greenhouse effect. Problematic too are data sets like ocean heat temperatures, sea-ice thickness and glaciers: all of which are varied, some have not been measured or there are insignificant measurement time spans for the data to be reliable.

Contrary to news media reports, some glaciers throughout the world (Norway [Chinn et al., 2005] and New Zealand [Purdie et al., 2008]) are growing, while others shrink (Paul et al., 2007).

Conclusion

This is clearly a contentious topic. There are many agendas at play, with careers at stake. My view represents one side of the debate: it is one I strongly believe in, and is, I contend, supported by the science around deforestation, on the ground, rather than focusing almost entirely on atmosphere. However, as a journalism educator, I also recognize that my view, along with others, must be open to challenge, both within the scientific community and in the court of public opinion.

As a journalism educator, it is my responsibility to provide my students with the research skills they need to question—and test—the arguments put forward by the key players in any debate. Given the complexity of the climate warming debate, and the contested nature of the science that underpins both sides, this will provide challenges well into the future. It is a challenge our students should relish, particularly in an era when they are constantly being bombarded with ‘fake news’ and so-called ‘alternative facts’.

To do so, they need to understand the science. If they don’t, they need to at least understand the key players in the debate and what is motivating them. They need to be prepared to question these people and to look beyond their arguments to the agendas that may be driving them. If they don’t, we must be reconciled to a future in which ‘fake news’ becomes the norm.

Examples of my investigative reports are in the Data Vs. Models posts listed at Climate Whack-a-Mole.

See also Yellow Climate Journalism

Some suggestions for reading National Climate Assessment reports critically are at Impaired Climate Vision.

 

 

Biden’s Arbitrary Social Cost of Carbon: What You Need to Know

The news on Friday was Biden signing another order, this one restoring the so-called “Social Cost of Carbon” to Obama’s $51 a ton, along with threats to raise it to as much as $125 a ton.  The whole notion is an exercise in imagination for the sake of adding regulatory costs to everything involving energy, which is to say everything.  A background post below describes the history of how this ruse started, and the manipulations and arbitrary assumptions used to gin up a number high enough to hobble the economy.

Background from 2018 post: US House Votes Down Social Cost of Carbon

The House GOP on Friday took a step forward in reining in the Obama administration’s method of assessing the cost of carbon dioxide pollution when developing regulations.

The House voted 212-201, along party lines, to include a rider blocking the use of the climate change cost metric to an energy and water spending bill.

The amendment offered by Texas Republican Rep. Louie Gohmert bars any and all funds from being used under the bill to “prepare, propose, or promulgate any regulation that relies on the Social Carbon analysis” devised under the Obama administration on how to value the cost of carbon. (Source Washington Examiner, here)

To clarify: the amendment in question defunds any regulation or guidance from the federal government concerning the social costs of carbon.

Background: 
The Obama administration created and increased its estimates of the “Social Cost of Carbon,” invented by Michael Greenstone, who commented on the EPA Proposed Repeal of CO2 emissions regulations.  A Washington Post article, October 11, 2017, included this:

“My read is that the political decision to repeal the Clean Power Plan was made and then they did whatever was necessary to make the numbers work,” added Michael Greenstone, a professor of economics at the University of Chicago who worked on climate policy during the Obama years.

Activists are alarmed because the Clean Power Plan is under serious attack along three lines:
1. No federal law governs CO2 emissions.
2. EPA regulates sites, not the Energy Sector.
3. CPP costs are huge, while benefits are marginal.

Complete discussion at CPP has Three Fatal Flaws.

Read below how Greenstone and a colleague did exactly what he now complains about.

Social Cost of Carbon: Origins and Prospects

The Obama administration has been fighting climate change with a rogue wave of regulations whose legality comes from a very small base: The Social Cost of Carbon.

The purpose of the “social cost of carbon” (SCC) estimates presented here is to allow agencies to incorporate the social benefits of reducing carbon dioxide (CO2) emissions into cost-benefit analyses of regulatory actions that impact cumulative global emissions. The SCC is an estimate of the monetized damages associated with an incremental increase in carbon emissions in a given year. It is intended to include (but is not limited to) changes in net agricultural productivity, human health, property damages from increased flood risk, and the value of ecosystem services due to climate change. From the Technical Support Document: Technical Update of the Social Cost of Carbon for Regulatory Impact Analysis Under Executive Order 12866.

A recent Bloomberg article explains how the SCC notion was invented, why it matters, and how it might change under the Trump administration.
How Climate Rules Might Fade Away; Obama used an arcane number to craft his regulations. Trump could use it to undo them. (here). Excerpts below with my bolds.


In February 2009, a month after Barack Obama took office, two academics sat across from each other in the White House mess hall. Over a club sandwich, Michael Greenstone, a White House economist, and Cass Sunstein, Obama’s top regulatory officer, decided that the executive branch needed to figure out how to estimate the economic damage from climate change. With the recession in full swing, they were rightly skeptical about the chances that Congress would pass a nationwide cap-and-trade bill. Greenstone and Sunstein knew they needed a Plan B: a way to regulate carbon emissions without going through Congress.

Over the next year, a team of economists, scientists, and lawyers from across the federal government convened to come up with a dollar amount for the economic cost of carbon emissions. Whatever value they hit upon would be used to determine the scope of regulations aimed at reducing the damage from climate change. The bigger the estimate, the more costly the rules meant to address it could be. After a year of modeling different scenarios, the team came up with a central estimate of $21 per metric ton, which is to say that by their calculations, every ton of carbon emitted into the atmosphere imposed $21 of economic cost. It has since been raised to around $40 a ton.

Trump can’t undo the SCC by fiat. There is established case law requiring the government to account for the impact of carbon, and if he just repealed it, environmentalists would almost certainly sue.

There are other ways for Trump to undercut the SCC. By tweaking some of the assumptions and calculations that are baked into its model, the Trump administration could pretty much render it irrelevant, or even skew it to the point that carbon emissions come out as a benefit instead of a cost.

The SCC models rely on a “discount rate” to state the harm from global warming in today’s dollars. The higher the discount rate, the lower the estimate of harm. That’s because the costs incurred by burning carbon lie mostly in the distant future, while the benefits (heat, electricity, etc.) are enjoyed today. A high discount rate shrinks the estimates of future costs but doesn’t affect present-day benefits. The team put together by Greenstone and Sunstein used a discount rate of 3 percent to come up with its central estimate of $21 a ton for damage inflicted by carbon. But changing that discount just slightly produces big swings in the overall cost of carbon, turning a number that’s pushing broad changes in everything from appliances to coal leasing decisions into one that would have little or no impact on policy.

According to a 2013 government update on the SCC, by applying a discount rate of 5 percent, the cost of carbon in 2020 comes out to $12 a ton; using a 2.5 percent rate, it’s $65. A 7 percent discount rate, which has been used by the EPA for other regulatory analysis, could actually lead to a negative carbon cost, which would seem to imply that carbon emissions are beneficial. “Once you start to dig into how the numbers are constructed, I cannot fathom how anyone could think it has any basis in reality,” says Daniel Simmons, vice president for policy at the American Energy Alliance and a member of the Trump transition team focusing on the Energy Department.
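To see why the discount rate dominates these results, here is a minimal Python sketch of the discounting arithmetic. The damage stream is made up purely for illustration and is not taken from the federal models, so it will not reproduce the specific $12, $65 or negative figures above; it only shows how the same future damages shrink as the discount rate rises.

```python
# Illustrative only: the damage profile is hypothetical, not from the IWG models.
def present_value(damage_per_year, years, discount_rate):
    """Discount a constant annual damage stream back to today's dollars."""
    return sum(damage_per_year / (1 + discount_rate) ** t for t in range(1, years + 1))

# Hypothetical: one ton of CO2 causes $0.75 of damage per year for 280 years (to ~2300).
for rate in (0.025, 0.03, 0.05, 0.07):
    pv = present_value(0.75, 280, rate)
    print(f"discount rate {rate:.1%}: present value of damages ≈ ${pv:.2f} per ton")
```

Running the sketch shows the present value falling steadily as the rate climbs, which is the mechanism the excerpt describes.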

David Kreutzer, a senior research fellow in energy economics and climate change at Heritage and a member of Trump’s EPA transition team, laid out one of the primary arguments against the SCC. “Believe it or not, these models look out to the year 2300. That’s like effectively asking, ‘If you turn your light switch on today, how much damage will that do in 2300?’ That’s way beyond when any macroeconomic model can be trusted.”

Another issue for those who question the Obama administration’s SCC: It estimates the global costs and benefits of carbon emissions, rather than just focusing on the impact to the U.S. Critics argue that this pushes the cost of carbon much higher and that the calculation should instead be limited to the U.S.; that would lower the cost by more than 70 percent, says the CEI’s Marlo Lewis.

Still, by narrowing the calculation to the U.S., Trump could certainly produce a lower cost of carbon. Asked in an e-mail whether the new administration would raise the discount rate or narrow the scope of the SCC to the U.S., one person shaping Trump energy and environmental policy replied, “What prevents us from doing both?”

See Also:

Six Reasons to Rescind Social Cost of Carbon

SBC: Social Benefits of Carbon


Updated: Global Warming Ends 2021

The animation is an update of a previous analysis from Dr. Murry Salby.  These graphs use Hadcrut4 and include the 2016 El Nino warming event.  The exhibit shows that since 1947 GMT (global mean temperature) warmed by 0.8C, from 13.9C to 14.7C, as estimated by Hadcrut4.  This resulted from three natural warming events involving ocean cycles. The most recent rise, 2013-16, lifted temperatures by 0.2C.  Previously the 1997-98 El Nino produced a plateau increase of 0.4C.  Before that, a rise from 1977-81 added 0.2C to start the warming since 1947.

Importantly, the theory of human-caused global warming asserts that increasing CO2 in the atmosphere changes the baseline and causes systemic warming in our climate.  On the contrary, all of the warming since 1947 was episodic, coming from three brief events associated with oceanic cycles. Moreover, the UAH record shows that the effects of the last one are now gone as of January 2021. Updated to March 2021 (the UAH baseline is now 1991-2020).
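A note on what the baseline change means in practice: re-referencing an anomaly series to a new baseline period is just a shift by the mean of the old anomalies over that period, computed separately for each calendar month. A minimal sketch of that generic arithmetic (not UAH’s own processing code):

```python
from collections import defaultdict

def rebaseline(anomalies, new_baseline_years):
    """anomalies: dict {(year, month): anomaly vs. the old baseline, in deg C}.
    Returns the same series re-referenced to new_baseline_years, with a separate
    offset per calendar month so the seasonal cycle is preserved."""
    per_month = defaultdict(list)
    for (year, month), value in anomalies.items():
        if year in new_baseline_years:
            per_month[month].append(value)
    offsets = {m: sum(vals) / len(vals) for m, vals in per_month.items()}
    return {(y, m): v - offsets[m] for (y, m), v in anomalies.items()}

# e.g. rebaseline(uah_monthly, range(1991, 2021)) shifts anomalies to a 1991-2020 mean.
```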

[Figure: UAH global temperature anomalies, 1995 to March 2021]

The 2016 El Nino persisted longer than the 1998 event, and was followed by warming after-effects in the NH.  The monthly anomaly as 2021 begins matches the 0.04C average since 1995, an ENSO-neutral year prior to the second warming event discussed above. With a quiet sun and cooling oceans, the prospect is for cooler times ahead.

Postscript:  Article by Dr. Arnd Bernaerts regarding ENSO and Climate Models

At Oceans Govern Climate, Arnd writes: Instead of El Niño, La Niña 2020/21 came.


He summarizes in this way (in italics with my bolds):

Although ENSO is a long-known climate phenomenon, climatologists still follow the view of the meteorologists of 100 years ago, according to which the atmosphere is at the center of all weather events. They are generously willing to acknowledge that the oceans play an important role, but not that ocean temperatures and their contribution to atmospheric humidity are the most crucial factors. This can be seen in the example of ENSO. Although small in oceanic proportions, it can have long-distance effects on the weather above. Once it happens, e.g. due to a lack of trade winds, the triggering cause remains the change in equatorial water temperatures.

The attempt to produce ENSO forecasts from computer models and weather observation data, via atmosphere-ocean coupling, failed with the 2020/2021 forecast and will not achieve what is needed in the future either.

What is needed is twofold: (a) much more ocean data, and (b) acknowledging the supremacy of the oceans in climatic change matters.

No ocean area is as intensively observed as the Equatorial Eastern Pacific (EEP), now for well over 40 years. Recently the sustained sampling network of the Tropical Pacific Observing System (TPOS 2020) has become the “backbone” of the system (details: WMO). Whether this system can provide even nearly enough oceanic data to make predictions about what is going on under the sea surface cannot be judged here, but it is unlikely, and will remain so for a long time.

So the other problem remains: the climatologists’ narrow view of the atmosphere. The authors of the El Nino forecast for 2020/21 failed because they lacked the insight that, without comprehensive marine data, their model calculations are at best speculations. At least this conclusion should be drawn from their dramatically false prognosis.

In conclusion, climatology should realize that any ocean space, whether a few hundred square miles in size or as large as the area covered by ENSO, plays an important role in climate matters, and that ENSO should be regarded as a gift for understanding more quickly the mechanism of what drives the climate.

Oceans Cold to Start 2021


The best context for understanding decadal temperature changes comes from the world’s sea surface temperatures (SST), for several reasons:

  • The ocean covers 71% of the globe and drives average temperatures;
  • SSTs have a constant water content (unlike air temperatures), so they give a better reading of heat content variations;
  • A major El Nino was the dominant climate feature in recent years.

HadSST is generally regarded as the best of the global SST data sets, and so the temperature story here comes from that source, the latest version being HadSST3.  More on what distinguishes HadSST3 from other SST products at the end.

The Current Context

The year-end report below showed 2020 rapidly cooling in all regions.  The anomalies have continued to drop sharply and are now well below the mean since 1995.  This Global Cooling was also evident in the UAH Land and Ocean air temperatures (see 2021 Starts with Cool Land and Sea).

The chart below shows SST monthly anomalies as reported in HadSST3, starting in 2015 through January 2021. After three straight Spring 2020 months of cooling led by the Tropics and SH, NH spiked in the summer, along with smaller bumps elsewhere.  Over the last six months temps everywhere have been dropping, with all regions now well below the Global Mean since 2015, matching the cold of 2018, and lower than January 2015.

A global cooling pattern is seen clearly in the Tropics since its peak in 2016, joined by NH and SH cycling downward since 2016.  In 2019 all regions had been converging to reach nearly the same value in April.

Then  NH rose exceptionally by almost 0.5C over the four summer months, in August 2019 exceeding previous summer peaks in NH since 2015.  In the 4 succeeding months, that warm NH pulse reversed sharply. Then again NH temps warmed to a 2020 summer peak, matching 2019.  This has now been reversed with all regions pulling the Global anomaly downward sharply.

Note that higher temps in 2015 and 2016 were first of all due to a sharp rise in Tropical SST, beginning in March 2015, peaking in January 2016, and steadily declining back below its beginning level. Secondly, the Northern Hemisphere added three bumps on the shoulders of Tropical warming, with peaks in August of each year.  A fourth NH bump was lower and peaked in September 2018.  As noted above, a fifth peak in August 2019 and a sixth August 2020 exceeded the four previous upward bumps in NH.

And as before, note that the global release of heat was not dramatic, due to the Southern Hemisphere offsetting the Northern one.  The major difference between now and 2015-2016 is the absence of Tropical warming driving the SSTs, along with SH anomalies reaching nearly the lowest in this period. Presently both SH and the Tropics are quite cool, with NH coming off its summer peak.  Note the tropical temps descending into La Nina levels.  At this point, the 2016 El Nino and its NH after effects have dissipated completely.

A longer view of SSTs

The graph below  is noisy, but the density is needed to see the seasonal patterns in the oceanic fluctuations.  Previous posts focused on the rise and fall of the last El Nino starting in 2015.  This post adds a longer view, encompassing the significant 1998 El Nino and since.  The color schemes are retained for Global, Tropics, NH and SH anomalies.  Despite the longer time frame, I have kept the monthly data (rather than yearly averages) because of interesting shifts between January and July.


1995 is a reasonable (ENSO neutral) starting point prior to the first El Nino.  The sharp Tropical rise peaking in 1998 is dominant in the record, starting Jan. ’97 to pull up SSTs uniformly before returning to the same level Jan. ’99.  For the next 2 years, the Tropics stayed down, and the world’s oceans held steady around 0.2C above 1961 to 1990 average.

Then comes a steady rise over two years to a lesser peak Jan. 2003, but again uniformly pulling all oceans up around 0.4C.  Something changes at this point, with more hemispheric divergence than before. Over the 4 years until Jan 2007, the Tropics go through ups and downs, NH a series of ups and SH mostly downs.  As a result the Global average fluctuates around that same 0.4C, which also turns out to be the average for the entire record since 1995.

2007 stands out with a sharp drop in temperatures so that Jan. ’08 matches the low in Jan. ’99, but starting from a lower high. The oceans all decline as well, until temps build, peaking in 2010.

Now again a different pattern appears.  The Tropics cool sharply to Jan 11, then rise steadily for 4 years to Jan 15, at which point the most recent major El Nino takes off.  But this time in contrast to ’97-’99, the Northern Hemisphere produces peaks every summer pulling up the Global average.  In fact, these NH peaks appear every July starting in 2003, growing stronger to produce 3 massive highs in 2014, 15 and 16.  NH July 2017 was only slightly lower, and a fifth NH peak still lower in Sept. 2018.

The highest summer NH peak came in 2019, only this time the Tropics and SH are offsetting rather than adding to the warming. Since 2014 SH has played a moderating role, offsetting the NH warming pulses. Now September 2020 is dropping off last summer’s unusually high NH SSTs. (Note: these are high anomalies on top of the highest absolute temps in the NH.)

What to make of all this? The patterns suggest that in addition to El Ninos in the Pacific driving the Tropic SSTs, something else is going on in the NH.  The obvious culprit is the North Atlantic, since I have seen this sort of pulsing before.  After reading some papers by David Dilley, I confirmed his observation of Atlantic pulses into the Arctic every 8 to 10 years.

But the peaks coming nearly every summer in HadSST require a different picture.  Let’s look at August, the hottest month in the North Atlantic from the Kaplan dataset.
The AMO Index is from Kaplan SST v2, the unaltered and not detrended dataset. By definition, the data are monthly average SSTs interpolated to a 5×5 grid over the North Atlantic, basically 0 to 70N. The graph shows August warming began after 1992 up to 1998, with a series of matching years since, including 2020.  Because the N. Atlantic has partnered with the Pacific ENSO recently, let’s take a closer look at some AMO years in the last 2 decades.
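For readers who want to reproduce an index like this themselves, the usual recipe is an area-weighted mean of the gridded SST over the North Atlantic box, with no detrending. A minimal sketch, with the array layout and 5-degree grid assumed for illustration (this is not the Kaplan processing code itself):

```python
import numpy as np

def basin_mean_sst(sst, lats, lons, lat_range=(0, 70), lon_range=(-80, 0)):
    """Area-weighted mean SST over a lat/lon box for one month.
    sst: 2-D array indexed (lat, lon), with np.nan over land.
    lats, lons: 1-D arrays of grid-cell center coordinates (e.g. a 5x5 degree grid)."""
    lat_sel = (lats >= lat_range[0]) & (lats <= lat_range[1])
    lon_sel = (lons >= lon_range[0]) & (lons <= lon_range[1])
    box = sst[np.ix_(lat_sel, lon_sel)]
    # Weight each cell by cos(latitude), since cells cover less area toward the pole.
    weights = np.cos(np.deg2rad(lats[lat_sel]))[:, None] * np.ones(lon_sel.sum())
    weights = np.where(np.isnan(box), 0.0, weights)   # drop land / missing cells
    return np.nansum(np.where(np.isnan(box), 0.0, box) * weights) / weights.sum()
```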

This graph shows monthly AMO temps for some important years. The peak years were 1998, 2010 and 2016, with the latter emphasized as the most recent. The other years show lesser warming, with 2007 emphasized as the coolest in the last 20 years. Note the red 2018 line is at the bottom of all these tracks. The black line shows that 2020 began slightly warm, then set records for 3 months, then dropped below 2016 and 2017, peaked in August, and is now below 2016.

Summary

The oceans are driving the warming this century.  SSTs took a step up with the 1998 El Nino and have stayed there with help from the North Atlantic, and more recently the Pacific northern “Blob.”  The ocean surfaces are releasing a lot of energy, warming the air, but eventually will have a cooling effect.  The decline after 1937 was rapid by comparison, so one wonders: How long can the oceans keep this up? If the pattern of recent years continues, NH SST anomalies may rise slightly in coming months, but once again ENSO, which has weakened, will probably determine the outcome.

Footnote: Why Rely on HadSST3

HadSST3 is distinguished from other SST products because HadCRU (Hadley Climatic Research Unit) does not engage in SST interpolation, i.e. infilling estimated anomalies into grid cells lacking sufficient sampling in a given month. From reading the documentation and from queries to Met Office, this is their procedure.

HadSST3 imports data from gridcells containing ocean, excluding land cells. From past records, they have calculated daily and monthly average readings for each grid cell for the period 1961 to 1990. Those temperatures form the baseline from which anomalies are calculated.

In a given month, each gridcell with sufficient sampling is averaged for the month and then the baseline value for that cell and that month is subtracted, resulting in the monthly anomaly for that cell. All cells with monthly anomalies are averaged to produce global, hemispheric and tropical anomalies for the month, based on the cells in those locations. For example, Tropics averages include ocean grid cells lying between latitudes 20N and 20S.
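Here is a minimal Python sketch of the gridcell procedure just described: average each sufficiently sampled cell for the month, subtract that cell’s 1961-1990 baseline for the same calendar month, then average the resulting anomalies over the cells of a region. It is illustrative only; the actual HadSST3 processing also applies bias adjustments, area weighting and uncertainty estimates not shown here.

```python
import numpy as np

def cell_anomalies(obs, baseline, min_obs=1):
    """obs: dict {cell_id: list of SST readings in the cell for one month}.
    baseline: dict {cell_id: 1961-1990 mean for that cell and calendar month}.
    Returns {cell_id: anomaly}, skipping cells without enough observations."""
    return {cell: np.mean(readings) - baseline[cell]
            for cell, readings in obs.items()
            if len(readings) >= min_obs and cell in baseline}

def region_average(anomalies, region_cells):
    """Average monthly anomalies over a region's cells (e.g. the Tropics:
    ocean cells between 20N and 20S); cells lacking data are simply left out."""
    values = [anomalies[c] for c in region_cells if c in anomalies]
    return sum(values) / len(values) if values else float("nan")
```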

Gridcells lacking sufficient sampling that month are left out of the averaging, and the uncertainty from such missing data is estimated. IMO that is more reasonable than inventing data to infill. And it seems that the Global Drifter Array displayed in the top image is providing more uniform coverage of the oceans than in the past.


USS Pearl Harbor deploys Global Drifter Buoys in Pacific Ocean

 

Texas Confirms It: 10% Renewable Power Puts Grid at Risk

A recent post here on the Great Texas Blackout of 2021 reinforces the rule of thumb found in other electrical grids exposed to intermittent feeds from wind and solar. That post, Data Show Wind Power Messed Up Texas, described how the loss of wind power due to frozen turbines caused over 4 million homes in Texas to lose power, many of which are still short of drinking water.  The Texas sources of electrical power were shown as:

Note that despite wind nameplate capacity of 25 GW, ERCOT is only counting on 33% of wind power to be available.  At 8 GW, wind is expected to supply about 10% of the operational capacity.  At 6 pm on Feb. 14, 2021, wind was at 9 GW before collapsing to 5 GW and then to less than 1 GW in a few hours.
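A quick back-of-envelope check of those planning numbers, using the figures quoted above and the roughly 85 GW of ERCOT operational capacity cited later in this post:

```python
# Back-of-envelope check of the ERCOT wind planning figures quoted above.
wind_nameplate_gw = 25.0        # installed wind capacity
counted_fraction = 0.33         # share of wind ERCOT counts on for winter planning
operational_capacity_gw = 85.0  # approximate ERCOT operational generating capacity

wind_counted_gw = wind_nameplate_gw * counted_fraction
print(f"Wind counted on: {wind_counted_gw:.1f} GW")                            # ~8 GW
print(f"Share of capacity: {wind_counted_gw / operational_capacity_gw:.0%}")   # ~10%

# Actual wind output on Feb. 14-15 fell from 9 GW to 5 GW to under 1 GW,
# i.e. far below even that modest planning assumption.
```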

This matches the pattern of grids going unstable when exceeding about 10% of power generated by wind and/or solar.  Reprinted below is a post explaining the issues.

Background: Climateers Tilting at Windmills 


Don Quixote (“don key-ho-tee”) in Cervantes’ famous novel charged at some windmills claiming they were enemies, and is celebrated in the English language by two idioms:

Tilting at Windmills–meaning attacking imaginary enemies, and

Quixotic (“quick-sottic”)–meaning striving for visionary ideals.

It is clear that climateers are similarly engaged in some kind of heroic quest, like modern-day Don Quixotes. The only differences: They imagine a trace gas in the air is the enemy, and that windmills are our saviors.

A previous post (at the end) addresses the unreality of the campaign to abandon fossil fuels in the face of the world’s demand for that energy.  Now we have a startling assessment of the imaginary benefits of using windmills to power electrical grids.  This conclusion comes from Gail Tverberg, a seasoned analyst of economic effects from resource limits, especially energy.  Her blog is called Our Finite World, indicating her viewpoint.  So her dismissal of wind power is a serious indictment.  A synopsis follows. (Title is link to article)

Intermittent Renewables Can’t Favorably Transform Grid Electricity

In fact, I have come to the rather astounding conclusion that even if wind turbines and solar PV could be built at zero cost, it would not make sense to continue to add them to the electric grid in the absence of very much better and cheaper electricity storage than we have today. There are too many costs outside building the devices themselves. It is these secondary costs that are problematic. Also, the presence of intermittent electricity disrupts competitive prices, leading to electricity prices that are far too low for other electricity providers, including those providing electricity using nuclear or natural gas. The tiny contribution of wind and solar to grid electricity cannot make up for the loss of more traditional electricity sources due to low prices.

Let’s look at some of the issues that we are encountering, as we attempt to add intermittent renewable energy to the electric grid.

Issue 1. Grid issues become a problem at low levels of intermittent electricity penetration.

Hawaii consists of a chain of islands, so it cannot import electricity from elsewhere. This is what I mean by “Generation = Consumption.” There is, of course, some transmission line loss with all electrical generation, so generation and consumption are, in fact, slightly different.

The situation is not too different in California. The main difference is that California can import non-intermittent (also called “dispatchable”) electricity from elsewhere. It is really the ratio of intermittent electricity to total electricity that is important, when it comes to balancing. California is running into grid issues at a similar level of intermittent electricity penetration (wind + solar PV) as Hawaii–about 12.3% of electricity consumed in 2015, compared to 12.2% for Hawaii.
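The penetration figure being compared here is simply the ratio of intermittent generation to total electricity consumed. A trivial sketch (the annual GWh values below are placeholders chosen only to reproduce the stated percentages, not actual 2015 data):

```python
def intermittent_penetration(wind_gwh, solar_gwh, total_consumed_gwh):
    """Share of electricity consumption met by intermittent sources (wind + solar PV)."""
    return (wind_gwh + solar_gwh) / total_consumed_gwh

# Placeholder annual figures, chosen to illustrate the ~12% levels cited above.
print(f"Hawaii:     {intermittent_penetration(900, 320, 10_000):.1%}")          # ~12.2%
print(f"California: {intermittent_penetration(24_500, 12_100, 297_500):.1%}")   # ~12.3%
```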

Issue 2. The apparent “lid” on intermittent electricity at 10% to 15% of total electricity consumption is caused by limits on operating reserves.

In theory, changes can be made to the system to allow the system to be more flexible. One such change is adding more long distance transmission, so that the variable electricity can be distributed over a wider area. This way the 10% to 15% operational reserve “cap” applies more broadly. Another approach is adding energy storage, so that excess electricity can be stored until needed later. A third approach is using a “smart grid” to make changes, such as turning off all air conditioners and hot water heaters when electricity supply is inadequate. All of these changes tend to be slow to implement and high in cost, relative to the amount of intermittent electricity that can be added because of their implementation.

Issue 3. When there is no other workaround for excess intermittent electricity, it must be curtailed–that is, dumped rather than added to the grid.

Based on the modeling of the company that oversees the California electric grid, electricity curtailment in California is expected to be significant by 2024, if the 40% California Renewable Portfolio Standard (RPS) is followed, and changes are not made to fix the problem.

Issue 4. When all costs are included, including grid costs and indirect costs, such as the need for additional storage, the cost of intermittent renewables tends to be very high.

In Europe, there is at least a reasonable attempt to charge electricity costs back to consumers. In the United States, renewable energy costs are mostly hidden, rather than charged back to consumers. This is easy to do, because their usage is still low.

Euan Mearns finds that in Europe, the greater the proportion of wind and solar electricity included in total generation, the higher electricity prices are for consumers.

Issue 5. The amount that electrical utilities are willing to pay for intermittent electricity is very low.

To sum up, when intermittent electricity is added to the electric grid, the primary savings are fuel savings. At the same time, significant costs of many different types are added, acting to offset these savings. In fact, it is not even clear that when a comparison is made, the benefits of adding intermittent electricity are greater than the costs involved.

Issue 6. When intermittent electricity is sold in competitive electricity markets (as it is in California, Texas, and Europe), it frequently leads to negative wholesale electricity prices. It also shaves the peaks off high prices at times of high demand.

When solar energy is included in the mix of intermittent fuels, it also tends to reduce peak afternoon prices. Of course, these minute-by-minute prices don’t really flow back to the ultimate consumers, so it doesn’t affect their demand. Instead, these low prices simply lead to lower funds available to other electricity producers, most of whom cannot quickly modify electricity generation.

A price of $36 per MWh is way down at the bottom of the chart, between 0 and 50. Pretty much no energy source can be profitable at such a level. Too much investment is required, relative to the amount of energy produced. We reach a situation where nearly every kind of electricity provider needs subsidies. If they cannot receive subsidies, many of them will close, leaving the market with only a small amount of unreliable intermittent electricity, and little back-up capability.

This same problem with falling wholesale prices, and a need for subsidies for other energy producers, has been noted in California and Texas. The Wall Street Journal ran an article earlier this week about low electricity prices in Texas, without realizing that this was a problem caused by wind energy, not a desirable result!

Issue 7. Other parts of the world are also having problems with intermittent electricity.

Needless to say, such high intermittent electricity generation leads to frequent spikes in generation. Germany chose to solve this problem by dumping its excess electricity supply on the European Union electric grid. Poland, Czech Republic, and Netherlands complained to the European Union. As a result, the European Union mandated that from 2017 onward, all European Union countries (not just Germany) can no longer use feed-in tariffs. Doing so provides too much of an advantage to intermittent electricity providers. Instead, EU members must use market-responsive auctioning, known as “feed-in premiums.” Germany legislated changes that went even beyond the minimum changes required by the European Union. Dörte Fouquet, Director of the European Renewable Energy Federation, says that the German adjustments will “decimate the industry.”

Issue 8. The amount of subsidies provided to intermittent electricity is very high.

The US Energy Information Administration prepared an estimate of certain types of subsidies (those provided by the federal government and targeted particularly at energy) for the year 2013. These amounted to a total of $11.3 billion for wind and solar combined. About 183.3 terawatt-hours of wind and solar energy were sold during 2013, at a wholesale price of about 2.8 cents per kWh, leading to a total selling price of $5.1 billion. If we add the wholesale price of $5.1 billion to the subsidy of $11.3 billion, we get a total of $16.4 billion paid to developers or used in special grid expansion programs. This subsidy amounts to 69% of the estimated total cost. Any subsidy from states, or from other government programs, would be in addition to the amount from this calculation.
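The arithmetic behind those figures, reproduced as a short sketch using only the numbers quoted above:

```python
# Reproducing the 2013 subsidy-share arithmetic from the paragraph above.
subsidy_billion = 11.3              # federal subsidies for wind + solar combined
generation_twh = 183.3              # wind + solar electricity sold in 2013
wholesale_usd_per_kwh = 0.028       # ~2.8 cents per kWh

wholesale_revenue_billion = generation_twh * 1e9 * wholesale_usd_per_kwh / 1e9
total_paid_billion = wholesale_revenue_billion + subsidy_billion

print(f"Wholesale revenue: ${wholesale_revenue_billion:.1f} billion")    # ~$5.1 billion
print(f"Total paid:        ${total_paid_billion:.1f} billion")           # ~$16.4 billion
print(f"Subsidy share:     {subsidy_billion / total_paid_billion:.0%}")  # ~69%
```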

In a sense, these calculations do not show the full amount of subsidy. If renewables are to replace fossil fuels, they must pay taxes to governments, just as fossil fuel providers do now. Energy providers are supposed to provide “net energy” to the system. The way that they share this net energy with governments is by paying taxes of various kinds–income taxes, property taxes, and special taxes associated with extraction. If intermittent renewables are to replace fossil fuels, they need to provide tax revenue as well. Current subsidy calculations don’t consider the high taxes paid by fossil fuel providers, and the need to replace these taxes, if governments are to have adequate revenue.

Also, the amount and percentage of required subsidy for intermittent renewables can be expected to rise over time, as more areas exceed the limits of their operating reserves, and need to build long distance transmission to spread intermittent electricity over a larger area. This seems to be happening in Europe now.

There is also the problem of the low profit levels for all of the other electricity providers, when intermittent renewables are allowed to sell their electricity whenever it becomes available. One potential solution is huge subsidies for other providers. Another is buying a lot of energy storage, so that energy from peaks can be saved and used when supply is low. A third solution is requiring that renewable energy providers curtail their production when it is not needed. Any of these solutions is likely to require subsidies.

Conclusion

Few people have stopped to realize that intermittent electricity isn’t worth very much. It may even have negative value, when the cost of all of the adjustments needed to make it useful are considered.

Energy products are very different in “quality.” Intermittent electricity is of exceptionally low quality. The costs that intermittent electricity impose on the system need to be paid by someone else. This is a huge problem, especially as penetration levels start exceeding the 10% to 15% level that can be handled by operating reserves, and much more costly adjustments must be made to accommodate this energy. Even if wind turbines and solar panels could be produced for $0, it seems likely that the costs of working around the problems caused by intermittent electricity would be greater than the compensation that can be obtained to fix those problems.

The economy does not perform well when the cost of energy products is very high. The situation with new electricity generation is similar. We need electricity products to be well-behaved (not act like drunk drivers) and low in cost, if they are to be successful in growing the economy. If we continue to add large amounts of intermittent electricity to the electric grid without paying attention to these problems, we run the risk of bringing the whole system down.

Why the Quest to Reduce Fossil Fuel Emissions is Quixotic

Roger Andrews at Energy Matters puts into context the whole mission to reduce carbon emissions. You only have to look at the G20 countries, who have 64% of the global population and use 80% of the world’s energy. The introduction to his essay, Electricity and energy in the G20:

While governments fixate on cutting emissions from the electricity sector, the larger problem of cutting emissions from the non-electricity sector is generally ignored. In this post I present data from the G20 countries, which between them consume 80% of the world’s energy, summarizing the present situation. The results show that the G20 countries obtain only 41.5% of their total energy from electricity and the remaining 58.5% dominantly from oil, coal and gas consumed in the non-electric sector (transportation, industrial processes, heating etc). So even if they eventually succeed in obtaining all their electricity from low-carbon sources they would still be getting more than half their energy from high-carbon sources if no progress is made in decarbonizing their non-electric sectors.

The whole article is enlightening, and shows how much our civilization depends on fossil fuels, even when other sources are employed. The final graph is powerful (thermal refers to burning of fossil fuels):

Figure 12: Figure 9 with Y-scale expanded to 100% and thermal generation included, illustrating the magnitude of the problem the G20 countries still face in decarbonizing their energy sectors.

The requirement is ultimately to replace the red-shaded bars with shades of dark blue, light blue or green – presumably dominantly light blue because nuclear is presently the only practicable solution.

Summary

There is another way. Adaptation means accepting the time-honored wisdom that weather and climates change in ways beyond our control. The future will have periods both cooler and warmer than the present and we must prepare for both contingencies. Colder conditions are the greater threat to human health and prosperity.  The key priorities are robust infrastructures and reliable, affordable energy.

Footnote:

This video shows Don Quixote might have more success against modern windmills.

Data Show Wind Power Messed Up Texas

Yes, with hindsight you can blame Texas for not winter-proofing fossil fuel supplies as places in more northern latitudes do.  But it was over-reliance on wind power that caused the problem and made it intractable.  John Peterson explains in his TalkMarkets article How Wind Power Caused The Great Texas Blackout Of 2021.  Excerpts in italics with my bolds.

  • The State of Texas is suffering from a catastrophic power grid failure that’s left 4.3 million homes without electricity, including 1.3 million homes in Houston, the country’s fourth-largest city.
  • While talking heads, politicians, and the press are blaming fossil fuels and claiming that more renewables are the solution, hard data from the Energy Information Administration paints a very different picture.
  • The generation failures that led to The Great Texas Blackout of 2021 began at 6 pm on Sunday. Wind power fell from 36% of nameplate capacity to 22% before midnight and plummeted to 3% of nameplate capacity by 8 pm on Monday.
  • While power producers quickly ramped production to almost 90% of dedicated natural gas capacity, a combination of factors, including shutdowns for scheduled maintenance and a statewide increase in natural gas demand, began to overload safety systems and set off a cascade of shutdowns.
  • While similar overload-induced shutdowns followed suit in coal and nuclear plants, the domino effect began with ERCOT’s reckless reliance on unreliable wind power.

The ERCOT grid has 85,281 MW of operational generating capacity if no plants are offline for scheduled maintenance. Under the “Winter Fuel Types” tab of its Capacity, Demand and Reserves Report dated December 16, 2020, ERCOT described its operational generating capacity by fuel source as follows:

Since power producers frequently take gas-fired plants offline for scheduled maintenance in February and March when power demand is typically low, ERCOT’s systemwide generating capacity was less than 85 GW and its total power load was 59.6 GW at 9:00 am on Valentine’s Day. By 8:00 pm, power demand had surged to 68 GW (up 14%). Then hell froze over. Over the next 24 hours, statewide power production collapsed to 43.5 GW (down 36%) and millions of households were plunged into darkness in freezing weather conditions.

I went to the US Energy Information Administration’s website and searched for hourly data on electricity production by fuel source in the State of Texas. The first treasure I found was this line graph that shows electricity generation by fuel source from 12:01 am on February 10th through 11:59 pm on February 16th.

The second and more important treasure was a downloadable spreadsheet file that contained the hourly data used to build the graph. An analysis of the hourly data shows:

  • Wind power collapsing from 9 GW to 5.45 GW between 6 pm and 11:59 pm on the 14th with natural gas ramping from 41 GW to 43 GW during the same period.
  • Wind power falling from 5.45 GW to 0.65 GW between 12:01 am and 8:00 pm on the 15th with natural gas spiking down from 40.4 GW to 33 GW between 2 am and 3 am as excess demand caused a cascade of safety events that took gas-fired plants offline.
  • Coal power falling from 11.1 GW to 7.65 GW between 2:00 am and 3:00 pm on the 15th as storm-related demand overwhelmed generating capacity.
  • Nuclear power falling from 5.1 GW to 3.8 GW at 7:00 am on the 15th as storm-related demand overwhelmed generating capacity.

The following table summarizes the capacity losses of each class of generating assets.

The Great Texas Blackout of 2021 was a classic domino-effect chain reaction where unreliable wind power experienced a 40% failure before gas-fired power plants began to buckle under the strain of an unprecedented winter storm. There were plenty of failures by the time the dust settled, but ERCOT’s reckless reliance on unreliable wind power set up the chain of dominoes that brought untold suffering and death to Texas residents.

The graph clearly shows that during their worst-performing hours:

  • Natural gas power plants produced at least 60.2% of the power available to Texas consumers, or 97% of their relative contribution to power supplies at 6:00 pm on Valentine’s day;
  • Coal-fired power plants produced at least 15.6% of the power available to Texas consumers, or 95% of their relative contribution to power supplies at 6:00 pm on Valentine’s day;
  • Nuclear power plants produced at least 7.5% of the power available to Texas consumers, or 97% of their relative contribution to power supplies at 6:00 pm on Valentine’s day; and
  • Wind power plants produced 1.5% of the power available to Texas consumers, or 11% of their relative contribution to power supplies at 6:00 pm on Valentine’s day; and
  • Solar power plants did what solar power plants do and had no meaningful impact.

Conclusion

Now that temperatures have moderated, things are getting back to normal, and The Great Texas Blackout of 2021 is little more than an unpleasant memory. While some Texas consumers are up in arms over blackout-related injuries, the State has rebounded, and many of us believe a few days of inconvenience is a fair price to pay for decades of cheap electric power. I think the inevitable investigations and public hearings will be immensely entertaining. I hope they lead to modest reforms of the free-wheeling ERCOT market that prevent irresponsible reliance on low-cost but wildly unreliable electricity from wind turbines.

Over the last year, wind stocks like Vestas Wind Systems (VWDRY), TPI Composites (TPIC), Northland Power (NPIFF), American Superconductor (AMSC), and NextEra Energy (NEE) have soared on market expectations of unlimited future growth. As formal investigations into the root cause of The Great Texas Blackout of 2021 proceed to an inescapable conclusion that unreliable wind power is not suitable for use in advanced economies, I think market expectations are likely to turn and turn quickly. I won’t be surprised if the blowback from The Great Texas Blackout of 2021 rapidly bleeds over to other overvalued sectors that rely on renewables as the heart of their raison d’etre, including vehicle electrification.

Supremes Steer Clear of Penn Case of Election Fraud

JUST IN – U.S. Supreme Court refuses to review #Pennsylvania election cases. No standing before an election, moot after. Justices Alito, Gorsuch, and Thomas dissent from the denial. Since it only takes 4 justices to hear a case, these cases were only one vote away from getting a full hearing at the SCOTUS. (Source: Disclose.tv tweet)  Excerpts in italics with my bolds from dissenting opinions. Full text available at Gateway Pundit post Supreme Court Refuses to Review Pennsylvania Election Cases – Alito, Gorsuch and Thomas Dissent.

Justice Thomas:

Changing the rules in the middle of the game is bad enough. Such rule changes by officials who may lack authority to do so is even worse. When those changes alter election results, they can severely damage the electoral system on which our self-governance so heavily depends. If state officials have the authority they have claimed, we need to make it clear. If not, we need to put an end to this practice now before the consequences become catastrophic.

Because the judicial system is not well suited to address these kinds of questions in the short time period available immediately after an election, we ought to use available cases outside that truncated context to address these admittedly important questions. Here, we have the opportunity to do so almost two years before the next federal election cycle. Our refusal to do so by hearing these cases is befuddling. There is a clear split on an issue of such great importance that both sides previously asked us to grant certiorari. And there is no dispute that the claim is sufficiently meritorious to warrant review. By voting to grant emergency relief in October, four Justices made clear that they think petitioners are likely to prevail. Despite pressing for review in October, respondents now ask us not to grant certiorari because they think the cases are moot. That argument fails.

The issue presented is capable of repetition, yet evades review. This exception to mootness, which the Court routinely invokes in election cases, “applies where (1) the challenged action is in its duration too short to be fully litigated prior to cessation or expiration, and (2) there is a reasonable expectation that the same complaining party will be subject to the same action again.”

And there is a reasonable expectation that these petitioners—the State Republican Party and legislators—will again confront nonlegislative officials altering election rules. In fact, various petitions claim that no fewer than four other decisions of the Pennsylvania Supreme Court implicate the same issue.  Future cases will arise as lower state courts apply those precedents to justify intervening in elections and changing the rules.

One wonders what this Court waits for. We failed to settle this dispute before the election, and thus provide clear rules. Now we again fail to provide clear rules for future elections. The decision to leave election law hidden beneath a shroud of doubt is baffling. By doing nothing, we invite further confusion and erosion of voter confidence. Our fellow citizens deserve better and expect more of us. I respectfully dissent.

Justice Alito, joined by Justice Gorsuch:

Now, the election is over, and there is no reason for refusing to decide the important question that these cases pose. . .A decision in these cases would not have any implications regarding the 2020 election. . . But a decision would provide invaluable guidance for future elections.

Some respondents contend that the completion of the 2020 election rendered these cases moot and that they do not fall within the mootness exception for cases that present questions that are “capable of repetition” but would otherwise evade review.  They argue that the Pennsylvania Supreme Court’s decision “arose from an extraordinary and unprecedented confluence of circumstances”—specifically, the COVID–19 pandemic, an increase in mail-in voting, and Postal Service delays—and that such a perfect storm is not likely to recur.

That argument fails for three reasons. First, it does not acknowledge the breadth of the Pennsylvania Supreme Court’s decision. That decision claims that a state constitutional provision guaranteeing “free and equal” elections gives the Pennsylvania courts the authority to override even very specific and unambiguous rules adopted by the legislature for the conduct of federal elections. . .That issue is surely capable of repetition in future elections. Indeed, it would be surprising if parties who are unhappy with the legislature’s rules do not invoke this decision and ask the state courts to substitute rules that they find more advantageous.

Second, the suggestion that we are unlikely to see a recurrence of the exact circumstances we saw this fall misunderstands the applicable legal standard. In order for a question to be capable of repetition, it is not necessary to predict that history will repeat itself at a very high level of specificity.

Third, it is highly speculative to forecast that the Pennsylvania Supreme Court will not find that conditions at the time of a future federal election are materially similar to those last fall. The primary election for Pennsylvania congressional candidates is scheduled to occur in 15 months, and the rules for the conduct of elections should be established well in advance of the day of an election. . . As voting by mail becomes more common and more popular, the volume of mailed ballots may continue to increase and thus pose delivery problems similar to those anticipated in 2020.

For these reasons, the cases now before us are not moot. There is a “reasonable expectation” that the parties will face the same question in the future. . ., and that the question will evade future pre-election review, just as it did in these cases. These cases call out for review, and I respectfully dissent from the Court’s decision to deny certiorari.

Background:  SCOTUS Conference on Election Integrity

Election Integrity is up for conference at SCOTUS on Friday.  The petition to be discussed is the complaint by the Republican Party of Pennsylvania against Secretary of the Commonwealth Kathy Boockvar, a proceeding that began on Sept. 28, 2020.  The petition makes clear the intent is not to overturn any completed election, but to ensure future elections are conducted according to the laws in force.  From scotusblog:

Republican Party of Pennsylvania v. Boockvar

Issues:  (1) Whether the Pennsylvania Supreme Court usurped the Pennsylvania General Assembly’s plenary authority to “direct [the] Manner” for appointing electors for president and vice president under Article II of the Constitution, as well as the assembly’s broad power to prescribe “[t]he Times, Places, and Manner” for congressional elections under Article I, when the court issued a ruling requiring the state to count absentee ballots that arrive up to three days after Election Day as long as they are not clearly postmarked after Election Day; and (2) whether that decision is preempted by federal statutes that establish a uniform nationwide federal Election Day.

The filing before the justices is the December 15, 2020 reply brief from the petitioner, the Republican Party of Pennsylvania:

No. 20-542 REPLY BRIEF IN SUPPORT OF PETITION FOR A WRIT OF CERTIORARI

Respondents’ Oppositions only confirm what some Respondents told the Court just weeks ago: that the Court should grant review and resolve the important and recurring questions presented in this case. Pa. Dems. Br. 9, No. 20A54 (Oct. 5, 2020) (advocating for review because the questions presented are “of overwhelming importance for States and voters across the country”); Sec’y Br. 2-3, No. 20A54 (Oct. 5, 2020). Respondents uniformly fail to mention that after the Republican Party of Pennsylvania (RPP) filed its Petition but more than a month before Respondents filed their Oppositions, the Eighth Circuit created a split on the question whether the Electors Clause constrains state courts from altering election deadlines enacted by state legislatures. See Carson v. Simon, 978 F.3d 1051 (8th Cir. 2020). Instead, Respondents seek to obfuscate the matter with a welter of vehicle arguments turning on the fact that Pennsylvania has certified the results of the 2020 general election. In reality, however, this case is an ideal vehicle, in part precisely because it will not affect the outcome of this election.

Indeed, this Court has repeatedly emphasized the imperative of settling the governing rules in advance of the next election, in order to promote the public “[c]onfidence in the integrity of our electoral processes [that] is essential to the functioning of our participatory democracy.” Purcell v. Gonzalez, 549 U.S. 1, 4 (2006). This case presents a vital and unique opportunity to do precisely that. By resolving the important and recurring questions now, the Court can provide desperately needed guidance to state legislatures and courts across the country outside the context of a hotly disputed election and before the next election. The alternative is for the Court to leave legislatures and courts with a lack of advance guidance and clarity regarding the controlling law only to be drawn into answering these questions in future after-the-fact litigation over a contested election, with the accompanying time pressures and perceptions of partisan interest.

Note:  As reported in Gateway Pundit, the legally required chain of custody for ballots was broken in every battleground state and in other states as well.

Democrats Were ONLY Able to “Win” in 2020 By Breaking Chain of Custody Laws in EVERY SWING STATE

President Trump was ahead in Pennsylvania by nearly 700,000 votes.
In Michigan Trump was ahead by over 300,000 votes.
In Wisconsin Trump was ahead by 120,000 votes.

Trump was also ahead in Georgia and Nevada.

And President Trump already trounced Joe Biden in Ohio, Florida, and Iowa — three states that ALWAYS go to the eventual presidential winner.

Then suddenly Pennsylvania, Michigan, and Wisconsin announced they would not be announcing their winner that night. This was an unprecedented and coordinated move in US history.

Then many crimes occurred to swing the election to Biden, but perhaps the greatest crime was the lack of dual controls and chain of custody records that ensure a fair and free election. At a high level, when ballots are transferred or changes are made in voting machines, these moves and changes should be done with two individuals present (dual control), one from each party, and the movements of ballots should be recorded.

So when states inserted drop boxes into the election, these changes first needed to be updated through the legislature, which they weren’t, and all movements from the time when the ballots were inserted into drop boxes needed to be recorded, which they weren’t.

Path Out of Covid Nightmare

WSJ posted an interview with Dr. Makary in the post The Perpetual Covid Crisis.  Some comments in italics with my bolds.

The lockdown lobby persists despite the vaccine rollout.

 

https://au.tv.yahoo.com/embed/wall-street-journal/wsj-opinion-path-covid-nightmare-204330227.html

Vaccination rates in Texas and other states have been increasing while hospitalizations are plunging. About one in five adults in Texas has received at least one dose of the Pfizer or Moderna vaccine. Most are seniors and people with health conditions who are at highest risk of severe illness. Hospitalizations in Texas have fallen more than 60% since a mid-January peak.

Politicians created a box canyon with lockdowns last spring that were originally intended to “flatten the curve.” But then every time governors loosened restrictions and cases ticked up, Democrats would demand lockdowns. Not that lockdowns (or mask mandates) much helped California or New York, which experienced bigger surges this winter than Florida did with neither.

Background from Previous Post  Immunity by Easter?

Could it be that doors and societies will open and life be reborn as early as Easter 2021?  That depends upon lockdown politicians and the scientists who advise them.  One such is Dr. Makary, a professor at the Johns Hopkins School of Medicine and Bloomberg School of Public Health, chief medical adviser to Sesame Care, and author of “The Price We Pay.”  His article at the Wall Street Journal is We’ll Have Herd Immunity by April.  Excerpts in italics with my bolds.

Covid cases have dropped 77% in six weeks. Experts should level with the public about the good news.

Amid the dire Covid warnings, one crucial fact has been largely ignored: Cases are down 77% over the past six weeks. If a medication slashed cases by 77%, we’d call it a miracle pill. Why is the number of cases plummeting much faster than experts predicted?

In large part because natural immunity from prior infection is far more common than can be measured by testing.

Testing has been capturing only from 10% to 25% of infections, depending on when during the pandemic someone got the virus. Applying a time-weighted case capture average of 1 in 6.5 to the cumulative 28 million confirmed cases would mean about 55% of Americans have natural immunity.
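
As a rough check on the arithmetic, here is a minimal sketch of the back-of-envelope estimate described above; the ~330 million U.S. population figure is my assumption and is not stated in the excerpt.

```python
# Back-of-envelope check of the "about 55%" natural-immunity estimate (illustrative only).
confirmed_cases = 28_000_000     # cumulative confirmed U.S. cases cited above
capture_ratio = 6.5              # "1 in 6.5" time-weighted case-capture average cited above
us_population = 330_000_000      # assumed U.S. population (not given in the excerpt)

estimated_infections = confirmed_cases * capture_ratio          # ~182 million
share_with_natural_immunity = estimated_infections / us_population

print(f"Estimated infections: {estimated_infections:,.0f}")
print(f"Share of population: {share_with_natural_immunity:.0%}")  # ~55%
```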

Now add people getting vaccinated. As of this week, 15% of Americans have received the vaccine, and the figure is rising fast. Former Food and Drug Commissioner Scott Gottlieb estimates 250 million doses will have been delivered to some 150 million people by the end of March.

There is reason to think the country is racing toward an extremely low level of infection. As more people have been infected, most of whom have mild or no symptoms, there are fewer Americans left to be infected. At the current trajectory, I expect Covid will be mostly gone by April, allowing Americans to resume normal life.

Antibody studies almost certainly underestimate natural immunity. Antibody testing doesn’t capture antigen-specific T-cells, which develop “memory” once they are activated by the virus. Survivors of the 1918 Spanish flu were found in 2008—90 years later—to have memory cells still able to produce neutralizing antibodies.

Researchers at Sweden’s Karolinska Institute found that the percentage of people mounting a T-cell response after mild or asymptomatic Covid-19 infection consistently exceeded the percentage with detectable antibodies. T-cell immunity was even present in people who were exposed to infected family members but never developed symptoms. A group of U.K. scientists in September pointed out that the medical community may be under-appreciating the prevalence of immunity from activated T-cells.

Covid-19 deaths in the U.S. would also suggest much broader immunity than recognized. About 1 in 600 Americans has died of Covid-19, which translates to a population fatality rate of about 0.15%. The Covid-19 infection fatality rate is about 0.23%. These numbers indicate that roughly two-thirds of the U.S. population has had the infection.
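
Spelled out, the implied calculation is just a ratio of the two rates cited above; a minimal sketch (the rearrangement of the two definitions is mine):

```python
# Rough check of the implied infection share from the two rates cited above.
population_fatality_rate = 0.0015   # ~0.15%, i.e. about 1 in 600 Americans
infection_fatality_rate = 0.0023    # ~0.23% cited in the excerpt

# Since IFR = deaths / infections and PFR = deaths / population,
# infections / population = PFR / IFR.
implied_infected_share = population_fatality_rate / infection_fatality_rate
print(f"Implied share ever infected: {implied_infected_share:.0%}")  # ~65%, roughly two-thirds
```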

In my own conversations with medical experts, I have noticed that they too often dismiss natural immunity, arguing that we don’t have data. The data certainly doesn’t fit the classic randomized-controlled-trial model of the old-guard medical establishment. There’s no control group. But the observational data is compelling.

I have argued for months that we could save more American lives if those with prior Covid-19 infection forgo vaccines until all vulnerable seniors get their first dose. Several studies demonstrate that natural immunity should protect those who had Covid-19 until more vaccines are available. Half my friends in the medical community told me: Good idea. The other half said there isn’t enough data on natural immunity, despite the fact that reinfections have occurred in less than 1% of people—and when they do occur, the cases are mild.

But the consistent and rapid decline in daily cases since Jan. 8 can be explained only by natural immunity. Behavior didn’t suddenly improve over the holidays; Americans traveled more over Christmas than they had since March. Vaccines also don’t explain the steep decline in January. Vaccination rates were low and they take weeks to kick in.

My prediction that Covid-19 will be mostly gone by April is based on laboratory data, mathematical data, published literature and conversations with experts. But it’s also based on direct observation of how hard testing has been to get, especially for the poor. If you live in a wealthy community where worried people are vigilant about getting tested, you might think that most infections are captured by testing. But if you have seen the many barriers to testing for low-income Americans, you might think that very few infections have been captured at testing centers. Keep in mind that most infections are asymptomatic, which still triggers natural immunity.

Many experts, along with politicians and journalists, are afraid to talk about herd immunity. The term has political overtones because some suggested the U.S. simply let Covid rip to achieve herd immunity. That was a reckless idea. But herd immunity is the inevitable result of viral spread and vaccination. When the chain of virus transmission has been broken in multiple places, it’s harder for it to spread—and that includes the new strains.

Herd immunity has been well-documented in the Brazilian city of Manaus, where researchers in the Lancet reported the prevalence of prior Covid-19 infection to be 76%, resulting in a significant slowing of the infection. Doctors are watching a new strain that threatens to evade prior immunity. But countries where new variants have emerged, such as the U.K., South Africa and Brazil, are also seeing significant declines in daily new cases. The risk of new variants mutating around the prior vaccinated or natural immunity should be a reminder that Covid-19 will persist for decades after the pandemic is over. It should also instill a sense of urgency to develop, authorize and administer a vaccine targeted to new variants.

Some medical experts privately agreed with my prediction that there may be very little Covid-19 by April but suggested that I not talk publicly about herd immunity because people might become complacent and fail to take precautions or might decline the vaccine. But scientists shouldn’t try to manipulate the public by hiding the truth. As we encourage everyone to get a vaccine, we also need to reopen schools and society to limit the damage of closures and prolonged isolation. Contingency planning for an open economy by April can deliver hope to those in despair and to those who have made large personal sacrifices.

Don’t Fence Me In!

Why Team Left Cheats More than Team Right

One of the few pleasures remaining during pandemania involves sports competitions where rules are followed and enforced by unbiased officials, so that teams or individuals win or lose based solely on the merit of their performances.  Elsewhere with identity politics and political correctness, it is a different story.  People on the right perceive accurately that their opponents on the left are not bound by the rules, and break them readily in order to win.

Brent E. Hamachek explains in his blog post Why They Cheat: a look at the behavioral differences between Team Right and Team Left.  Excerpts in italics with my bolds.

America is divided into two political teams; Team Right and Team Left. As Joe Biden and Kamala Harris assume office, many Team Right members are still trying to come to terms with the results of the 2020 election. They feel certain that Team Left cheated in a variety of ways in order to produce enough votes to secure victory.

Setting aside the MSM’s agreed-upon talking points of “baseless accusations” of election fraud and their “despite there being no evidence to support such claims” mantra, we now know that there was significant evidence of election tampering. That is actually a “fact” about which I’ve previously written. It is also, at this point, irrelevant. Joe Biden is in office. Focusing on 2020 election cheating is fine for investigators in various states if they so choose (there will be no federal investigation), but it is not helpful for ordinary citizens who would like to reverse trends.

The more helpful issue to explore in order to make a difference going forward is in answering this question: Why do Team Left members seem to be more willing to cheat than do Team Right members?

This is a question, I believe, that we can answer without needing any sort of physical proof. We can prove it solely through the use of our reason and with a clear understanding of the ethical structure, and attendant influences on behavior, of modern-day Team Left members (many of whom were election officials and vote counters).

When the typical person says they are “ethical,” they really mean that in their mind the things they do are the right things to do. This suggests a sort of self-legislating capability on the part of each person to know right from wrong. An idea like this can be found in the work of famous philosophers ranging from Immanuel Kant to Karl Marx to many others. They argue that each person is capable of such self-legislating and engages in the process constantly.

Very few people realize that there are actual ethical systems that have been “constructed” to help direct us on the path to making consistent and appropriate decisions as to how to act and behave in any given situation. We have the above-referenced Kant’s categorical imperative (if what I’m thinking of doing now were a rule that everyone had to follow, would it be workable for society?). We have Jeremy Bentham’s utilitarianism (pure cost-benefit analysis) or John Stuart Mill’s more refined and kinder version, which calls for cost-benefit analysis with an allowance for the subjective nature of “higher” human values.

There are a number of ways to view the development and deployment of moral and ethical behavior, but the typical person knows little, if any, of this. Yet they will tell you that they are ethical, and others are not. By what standard? How do they know? This logical dilemma, by the way, exists in people whether they were supporters of Donald Trump or Joe Biden; whether they are members of Team Right or Team Left. There is absolutely no difference in that respect. There is a difference we will get to eventually, but it does not involve ethics.

Hobbes was right!

It is my opinion, based upon many years of studying political philosophy, working in a large corporate environment, working with and running privately owned businesses, and doing political advising and writing, that the greatest of all the political philosophers, the one who got the most important thing right, was Englishman Thomas Hobbes. He was born in 1588, the year of the Spanish Armada; it is said that his mother went into premature labor upon seeing the ships off the English coast, thereby birthing poor Thomas out of fear.

Hobbes spent the rest of his life focusing on the fearful nature of humans, among other things.

He is the father of social contract theory, which describes man’s compact to enter into civil society as a way to control his more primitive impulses. He is famous for his line about man’s life in the state of nature, before the social contract, which he describes as being “solitary, poor, nasty, brutish, and short.” Hobbes suggested that, owing to their nature, men are unable to be left to govern themselves without stern direction. His diagnosis of us as people? Fearful and self-destructive. His prescription? A strong sovereign.

Hobbes is also the father of the idea of moral relativism. His contention is that, for the typical human, their calculation of whether or not something is “right or wrong” is nothing more than a reduction to looking at things that please them and things that offend them. They maximize the one and avoid the other. In that process, they create their own morality, or set of ethics, that is based solely upon their own desires and aversions.

My own fifty-eight years of study and empirical observations have led me to conclude that this theory of human behavior and ethical development most accurately describes the greatest number of people. Assuming a human population existing under a bell curve, Hobbes’s ethical construct describes the greatest number of people gathered around the mean.

At this point you might think I’m suggesting that Biden supporters, Team Left members, are moral relativists and Trump supporters, Team Right members, are not. That somehow I believe we are inherently better creatures than are they. You’d be wrong. I am not. I believe that most people are moral relativists in general, and even that people who attempt to operate under a more disciplined structure of ethics, including the Christian ethic, can become moral relativists at the very moment that they find themselves placed most at risk.

Survival is in our nature. When it is in jeopardy, even the most truly righteous can attempt to hedge their ethical bets.

Since I am concluding that there is no fundamental difference in ethics between the typical Trump or the typical Biden supporter, why go through all the trouble to share this background on ethics? After all, the purpose is to demonstrate how we can prove that Team Left members are more likely to cheat. I walked through the ethical piece because people typically consider cheating to be “unethical.” Yet it happens, and it happens more by their team than by ours.

To understand why, I believe we need to look beyond ethics and consider Tom Hanks, World War II, and the ancient Stoics.

Duty as a differentiator

Love or hate his personal life and politics, Tom Hanks makes spectacular movies and is especially good in war roles. A few months back, I had a chance to watch him in the Apple Television release of Greyhound. It is a story based on the U.S. Navy convoys that brought supplies and armaments across the Atlantic during World War II. It is not a long film, but it is action-packed from start to finish. For ninety minutes, there is nothing but German U-boat peril. American sailors show incredible courage, some losing their lives, others saving lives, up against challenging odds.

What happens to make men so courageous in one moment and so devoid of any kind of ethical or moral compass in the next? I think the answer lies in the notion of duty. Those men on the ship with Tom Hanks in that movie were driven in those moments by a higher calling. They had a sense of duty. Some, when they returned home, for whatever reason, might have lost their way and found themselves left with no higher calling. Absent duty, they were left with only their own personal moral and ethical framework in which to operate. Given moral relativism, they became able to justify almost any behavior.

This notion of duty is a very Stoic concept. Stoicism, which dates back to Ancient Greece, emphasizes duty and the importance of virtue. There were four attributes of virtue: wisdom, justice, courage, and moderation. Doing one’s duty was central to the Stoics. Duty manifested itself in more than just following orders; it meant adhering to the four key elements of virtue and to keeping in sync with all of nature.

One does not have to buy into all of Stoic philosophy to grasp the importance of duty. It is with duty that we can begin to answer our question: How can we know that Team Left members will cheat?

The answer lies in the absence of a sense of duty to something outside themselves. The typical contemporary Team Left member does not have any external force that commands him or her to “behave better.”

Again, operating under the bell curve, the mainstream Trump supporter tries to follow either the voice of God, the call of patriotism, or both. Both are external to themselves. Both set standards for behavior that transcend their own personal calculations of convenience. Both provide fairly clear direction, either through Scripture or the Constitution. Both rest like weights upon their shoulders, burdening them with a non-ignorable sense of obligation.

It doesn’t mean they won’t fail. It doesn’t mean they will not behave badly. It simply means they have a better chance of making a better choice than does a person who is not encumbered by any sense of duty other than to themselves. Duty is typically viewed as a call to act. It can just as easily be seen as the antithesis to action, which means it can inhibit. I must because it’s my duty. I must not because it betrays my duty.

Common responses I have received from Team Left members over the years when I ask them about feeling a sense of duty include:

• I have a duty to those around me.

• I have a duty to those less fortunate than myself.

• I have a duty to humanity.

The shared characteristic of each of those “duties” is that although they sound as if they reside “outside” the individual, they are wholly subjective with regard to their definition. Each individual person gets to define their “duty to others” however they see fit. There is no separate standard. For those focused on a Christian duty, there is the reasonable clarity of the Bible. For those who pledge allegiance to the United States of America, there is our Constitution bolstered by the original Declaration of Independence.

For those, however, who say that they simply have a duty to help “others,” the others can be whomever they so choose, and need whatever kind of help it is the helper decides they should provide.

Machiavelli provides the final element

To succinctly summarize my thoughts to this point, it is my personal belief that the members of Team Right are not inherently any more ethical than are their counterparts on Team Left. When it comes right down to it, individual to individual, most people are basic moral relativists as identified and defined by Hobbes, and given no other considerations, most people conduct themselves under an ethical code that is simply one of convenience.

The difference between the two is that those who answer to a calling of duty that is outside themselves and more objective than subjective in nature can have their individual passions held in check. It gives their better angels a chance to be heard and followed.

Machiavelli’s statement about ends and means explains why the modern-day Team Left member, almost always a Democrat, is so willing to cheat. Existing as a typical moral relativist where little to nothing is malum in se, and being for the most part unconstrained by a sense of duty other than that which they conveniently self-define, any sort of activity is permissible so long as they end up getting what they want. They give cover to this behavior by saying their actions are necessary to “help others.” As has been shown, that statement can mean whatever they want it to mean.

By our nature as humans, we are flawed and sinful creatures. That goes for Trump supporters as well as those who lined up behind Joe Biden. The difference is that for those of us who truly have a good old-fashioned love for God, country, or both, we have a voice outside ourselves warning us to control our nature. It asks us to heed a higher calling. It limits us in a way that is beneficial to maintaining an ordered, predictable, and just society.

Those who operate without that sense of duty are left to do whatever their free will wishes, unbound by any real constraints. They can justify their actions through the simple pleasure they feel or the pain they avoid. Their ends always can justify their means.  That is why they cheat. That is how we can use our reason to know they cheat.

Postscript:  Dennis Prager sees the left/right distinction in terms of focus on politics vs. persons.

That’s a major difference between the right and the left concerning the way each seeks to improve society. Conservatives believe that the way to a better society is almost always through the moral improvement of the individual, by each person doing battle with his or her own weaknesses and flaws. It is true that in violent and evil societies, such as fascist, Communist, or Islamist tyrannies, the individual must be preoccupied with battling outside forces. Almost everywhere else, though, certainly in a free and decent country such as America, the greatest battle of the individual must be with inner forces, that is, with his or her moral failings.

The left on the other hand, believes that the way to a better society is almost always through doing battle with society’s moral failings. Thus, in America, the left concentrates its efforts on combating sexism, racism, intolerance, xenophobia, homophobia, Islamophobia, and the many other evils that the left believes permeate American society.

One important consequence of this left-right distinction is that those on the left are far more preoccupied with politics than those on the right. Since the left is so much more interested in fixing society than in fixing the individual, politics inevitably becomes the vehicle for societal improvement. That’s why whenever the term activist is used, we almost always assume that the term refers to someone on the left.

See also: Left and Right on Climate (and so much else)

See also: Climate Science, Ethics and Religion