Conn AG Adds to Climate Lawsuit Dominos

Climate Dominos

William Allison reports at Energy In Depth Echoes of New York’s Failure:  Connecticut Files Climate Lawsuit.  Excerpts in italics with my bolds.

Four Years In The Making, But The Same Failed Arguments

Back in 2016, Tong’s predecessor, former Attorney General George Jepsen, enlisted Connecticut to take part in former New York Attorney General Eric Schneiderman’s “AGs United for Clean Power” – a coalition of state attorneys general that aimed to investigate major energy companies over climate change. Not only did Jepsen participate in the March 2016 press conference announcing the coalition, he discussed how its formation allowed for easier collaboration between attorneys general.

While the coalition ultimately fell apart, following the withdrawal of several attorneys general and scrutiny over the political motivation behind its formation, the group’s demise (and even New York’s unsuccessful lawsuit) hasn’t stopped Tong. In fact, he’s using many of the same arguments that Schneiderman ineffectively deployed nearly five years ago.

In Monday’s press conference announcing the Connecticut lawsuit, AG Tong said:

“We tried to think long and hard about what our best and most impactful contribution would be. And what we settled on was a single defendant with a very simple claim: Exxon knew, and they lied.” (emphasis added)

Apparently Tong did not get the memo that “Exxon Knew” – the theory pushed by activists and lawyers that the company knew about climate change and hid that knowledge from the public – has been completely debunked. It was this theory that Schneiderman initially built his case against the company around, but he was forced to abandon it because the facts were not on his side. Indeed, after his successor was told to “put up or shut up” on the accusations, the lawsuit was revised to remove these claims and instead focus on alleged accounting fraud. The case resulted in a resounding defeat for the New York attorney general, with State Supreme Court Justice Barry Ostrager calling the lawsuit “hyperbolic” and “without merit.”

The only two other climate lawsuits that have been decided on their merits, filed by San Francisco and Oakland and then by New York City, both failed.

Connecticut Has Benefitted From The National Campaign

The Connecticut lawsuit isn’t a standalone effort, but part of a larger national campaign supported and funded by activists and wealthy donors to pursue climate litigation against energy companies.

Tong is still pursuing the “Exxon Knew” angle that’s been developed by this campaign despite its previous losses and thinks his lawsuit is the strongest in the nation because the Connecticut Unfair Trade Practices Act doesn’t have a statute of limitations, allowing him to recall ExxonMobil documents from decades ago – even though the company already turned over 3 million documents as part of the New York Attorney General’s failed investigation.

Connecticut was also mentioned in the Pay Up Climate Polluters report, “Climate Costs 2040,” which seems to be a target list of cities to carry out potential litigation, as recent plaintiffs Hoboken, N.J. and Charleston, S.C. were also featured. Pay Up Climate Polluters is a campaign promoting climate litigation, sponsored by the Center for Climate Integrity, which in turn is a project of the Institute for Governance and Sustainable Development (IGSD), which, ironically enough, is paying for the outside counsel in Hoboken’s lawsuit.

IGSD also receives money from a network of Rockefeller groups, which, with the help of wealthy donors and activists, have manufactured the entire climate litigation campaign. Tong even filed an amicus brief in support of the climate lawsuit filed by San Francisco and Oakland.

During his press conference, Tong even thanked 350.org and The Sunrise Movement, two other Rockefeller-supported groups that support and actively promote climate litigation.

Conclusion

The lawsuit filed by the Connecticut Attorney General is just the latest case to emerge in the broader, national campaign being pushed by wealthy donors and activist groups. But while a new lawsuit generates new headlines, it does nothing to change the fact that it’s based on another rehashing of the debunked “Exxon Knew” theory that failed in New York and will do nothing to address climate change.

Background from Previous Post:  Climate Lawsuit Dominos

Posted to Energy March 05, 2020.  Curt Levey writes at InsideSources Climate Change Lawsuits Collapsing Like Dominoes.  Excerpts in italics with my bolds.

Climate change activists went to court in California recently trying to halt a long losing streak in their quest to punish energy companies for aiding and abetting the world’s consumption of fossil fuels.

A handful of California cities — big consumers of fossil fuels themselves — asked the U.S. Court of Appeals for the Ninth Circuit to reverse the predictable dismissal of their public nuisance lawsuit seeking to pin the entire blame for global warming on five energy producers: BP, Chevron, ConocoPhillips, ExxonMobil and Royal Dutch Shell.

The cities hope to soak the companies for billions of dollars of damages, which they claim they’ll use to build sea walls, better sewer systems and the like in anticipation of rising seas and extreme weather that might result from climate change.

But no plaintiff has ever succeeded in bringing a public nuisance lawsuit based on climate change.

To the contrary, these lawsuits are beginning to collapse like dominoes as courts remind the plaintiffs that it is the legislative and executive branches — not the judicial branch — that have the authority and expertise to determine climate policy.

Climate change activists should have gotten the message in 2011 when the Supreme Court ruled against eight states and other plaintiffs who brought nuisance claims for the greenhouse gas emissions produced by electric power plants.

The Court ruled unanimously in American Electric Power v. Connecticut that the federal Clean Air Act, under which such emissions are subject to EPA regulation, preempts such lawsuits.

The Justices emphasized that “Congress designated an expert agency, here, EPA … [that] is surely better equipped to do the job than individual district judges issuing ad hoc, case-by-case injunctions” and better able to weigh “the environmental benefit potentially achievable [against] our Nation’s energy needs and the possibility of economic disruption.”

The Court noted that this was true of “questions of national or international policy” in general, reminding us why the larger trend of misusing public nuisance lawsuits is a problem.

The California cities, led by Oakland and San Francisco, tried to get around this Supreme Court precedent by focusing on the international nature of the emissions at issue.

But that approach backfired in 2018 when federal district judge William Alsup concluded that a worldwide problem “deserves a solution on a more vast scale than can be supplied by a district judge or jury in a public nuisance case.” Alsup, a liberal Clinton appointee, noted that “Without [fossil] fuels, virtually all of our monumental progress would have been impossible.”

In July 2018, a federal judge in Manhattan tossed out a nearly identical lawsuit by New York City on the same grounds. The city is appealing.

Meanwhile, climate lawfare is also being waged against energy companies by Rhode Island and a number of municipal governments, including Baltimore. Like the other failed cases, these governments seek billions of dollars.

Adding to the string of defeats was the Ninth Circuit’s rejection last month of the so-called “children’s” climate suit, which took a somewhat different approach by pitting a bunch of child plaintiffs against the federal government.

The children alleged “psychological harms, others impairment to recreational interests, others exacerbated medical conditions, and others damage to property” and sought an injunction forcing the executive branch to phase out fossil fuel emissions.

Judge Andrew Hurwitz, an Obama appointee, wrote for the majority that “such relief is beyond our constitutional power.” The case for redress, he said, “must be presented to the political branches of government.”

Yet another creative, if disingenuous, litigation strategy was attempted by New York State’s attorney general, who sued ExxonMobil for allegedly deceiving investors about the impact of future climate change regulations on profits by keeping two sets of books.

That lawsuit went down in flames in December when a New York court ruled that the state failed to prove any “material misstatements” to investors.

All these lawsuits fail because they are grounded in politics, virtue signaling and — in most cases — the hope of collecting billions from energy producers, rather than in sound legal theories or a genuine strategy for fighting climate change.

But in the unlikely event these plaintiffs prevail, would they use their billion dollar windfalls to help society cope with global warming?

It’s unlikely if past history is any indication.

State and local governments that have won large damage awards in successful non-climate-related public nuisance lawsuits — tobacco litigation is the most famous example — have notoriously blown most of the money on spending binges unrelated to the original lawsuit or on backfilling irresponsible budget deficits.

The question of what would happen to the award money will likely remain academic. Even sympathetic judges have repeatedly refused to be roped by weak public nuisance or other contorted legal theories into addressing a national or international policy issue — climate change — that is clearly better left to elected officials.

Like anything built on an unsound foundation, these climate lawsuits will continue to collapse.

Curt Levey is a constitutional law attorney and president of the Committee for Justice, a nonprofit organization dedicated to preserving the rule of law.

Update March 10

Honolulu joins the domino lineup with its own MeToo lawsuit: Honolulu Sues Petroleum Companies For Climate Change Damages to City

Honolulu city officials, lashing out at the fossil fuel industry in a climate change lawsuit filed Monday, accused oil producers of concealing the dangers that greenhouse gas emissions from petroleum products would create, while reaping billions in profits.

The lawsuit, against eight oil companies, says climate change already is having damaging effects on the city’s coastline, and lays out a litany of catastrophic public nuisances—including sea level rise, heat waves, flooding and drought caused by the burning of fossil fuels—that are costing the city billions, and putting its residents and property at risk.

“We are seeing in real time coastal erosion and the consequences,” Josh Stanbro, chief resilience officer and executive director for the City and County of Honolulu Office of Climate Change, Sustainability and Resiliency, told InsideClimate News. “It’s an existential threat for what the future looks like for islanders.”  [ I wonder if Stanbro’s salary matches the length of his job title, or if it is contingent on winning the case.]

Why Wu Flu Virus Looks Man-made

A virologist who fled China after studying the early outbreak of COVID-19 has published a new report claiming the coronavirus likely came from a lab.  This adds to the analysis done by Dr. Luc Montagnier earlier this year, and summarized in a previous post reprinted later on.  Dr. Yan was interviewed on Fox News, and YouTube has now blocked the video.

If you are wondering why Big Tech is censoring information unflattering to China, see Lee Smith’s Tablet article America’s China Class Launches a New War Against Trump: The corporate, tech, and media elites will not allow the president to come between them and Chinese money.

Doctor Li-Meng Yan, a scientist who studied some of the available data on COVID-19, has published her claims on Zenodo, an open access digital platform. She wrote that she believed COVID-19 could have been “conveniently created” within a lab setting over a period of just six months, and that “SARS-CoV-2 shows biological characteristics that are inconsistent with a naturally occurring, zoonotic virus”.

The paper by Yan, Li-Meng; Kang, Shu; Guan, Jie; Hu, Shanchang  is Unusual Features of the SARS-CoV-2 Genome Suggesting Sophisticated Laboratory Modification Rather Than Natural Evolution and Delineation of Its Probable Synthetic Route.  Excerpts in italics with my bolds.

Overview

The natural origin theory, although widely accepted, lacks substantial support. The alternative theory that the virus may have come from a research laboratory is, however, strictly censored on peer-reviewed scientific journals. Nonetheless, SARS-CoV-2 shows biological characteristics that are inconsistent with a naturally occurring, zoonotic virus. In this report, we describe the genomic, structural, medical, and literature evidence, which, when considered together, strongly contradicts the natural origin theory.

The evidence shows that SARS-CoV-2 should be a laboratory product created by using bat coronaviruses ZC45 and/or ZXC21 as a template and/or backbone.

Contents

Consistent with this notion, genomic, structural, and literature evidence also suggest a non-natural origin of SARS-CoV-2. In addition, abundant literature indicates that gain-of-function research has long advanced to the stage where viral genomes can be precisely engineered and manipulated to enable the creation of novel coronaviruses possessing unique properties. In this report, we present such evidence and the associated analyses.

Part 1 of the report describes the genomic and structural features of SARS-CoV-2, the presence of which could be consistent with the theory that the virus is a product of laboratory modification beyond what could be afforded by simple serial viral passage. Part 2 of the report describes a highly probable pathway for the laboratory creation of SARS-CoV-2, key steps of which are supported by evidence present in the viral genome. Importantly, part 2 should be viewed as a demonstration of how SARS-CoV-2 could be conveniently created in a laboratory in a short period of time using available materials and well-documented techniques. This report is produced by a team of experienced scientists using our combined expertise in virology, molecular biology, structural biology, computational biology, vaccine development, and medicine.

We present three lines of evidence to support our contention that laboratory manipulation is part of the history of SARS-CoV-2:

i. The genomic sequence of SARS-CoV-2 is suspiciously similar to that of a bat coronavirus discovered by military laboratories in the Third Military Medical University (Chongqing, China) and the Research Institute for Medicine of Nanjing Command (Nanjing, China).

ii. The receptor-binding motif (RBM) within the Spike protein of SARS-CoV-2, which determines the host specificity of the virus, resembles that of SARS-CoV from the 2003 epidemic in a suspicious manner. Genomic evidence suggests that the RBM has been genetically manipulated.

iii. SARS-CoV-2 contains a unique furin-cleavage site in its Spike protein, which is known to greatly enhance viral infectivity and cell tropism. Yet, this cleavage site is completely absent in this particular class of coronaviruses found in nature. In addition, rare codons associated with this additional sequence suggest the strong possibility that this furin-cleavage site is not the product of natural evolution and could have been inserted into the SARS-CoV-2 genome artificially by techniques other than simple serial passage or multi-strain recombination events inside co-infected tissue cultures or animals.

Background from Previous post June 30, 2020:  Pandemic Update: Virus Weaker, HCQ Stronger

In past weeks there have been anecdotal reports from frontline doctors that patients who would have been flattened fighting off SARS CV2 in April are now sitting up and recovering in a few days. We have also the statistical evidence in the US and Sweden, as two examples, that case numbers are rising while Covid deaths continue declining. One explanation is that the new cases are younger people who have been released from lockdown (in US) with stronger immune systems. But it may also be that the virus itself is losing potency.

In the past I have noticed theories about the origin of the virus, and what makes it “novel.” But when the scientist who identified HIV weighs in, I pay particular attention. The Coronavirus Is Man Made According to Luc Montagnier the Man Who Discovered HIV. Excerpts in italics with my bolds.

Contrary to the narrative that is being pushed by the mainstream that the COVID 19 virus was the result of a natural mutation and that it was transmitted to humans from bats via pangolins, Dr Luc Montagnier the man who discovered the HIV virus back in 1983 disagrees and is saying that the virus was man made.

Professor Luc Montagnier, 2008 Nobel Prize winner for Medicine, claims that SARS-CoV-2 is a manipulated virus that was accidentally released from a laboratory in Wuhan, China. Chinese researchers are said to have used coronaviruses in their work to develop an AIDS vaccine. HIV RNA fragments are believed to have been found in the SARS-CoV-2 genome.

“With my colleague, bio-mathematician Jean-Claude Perez, we carefully analyzed the description of the genome of this RNA virus,” explains Luc Montagnier, interviewed by Dr Jean-François Lemoine for the daily podcast at Pourquoi Docteur, adding that others have already explored this avenue: Indian researchers have already tried to publish the results of the analyses that showed that this coronavirus genome contained sequences of another virus, … the HIV virus (AIDS virus), but they were forced to withdraw their findings as the pressure from the mainstream was too great.

To insert an HIV sequence into this genome requires molecular tools

In a challenging question, Dr Jean-François Lemoine inferred that the coronavirus under investigation may have come from a patient who is otherwise infected with HIV. “No,” says Luc Montagnier, “in order to insert an HIV sequence into this genome, molecular tools are needed, and that can only be done in a laboratory.”

According to the 2008 Nobel Prize for Medicine, a plausible explanation would be an accident in the Wuhan laboratory. He also added that the purpose of this work was the search for an AIDS vaccine.

In any case, this thesis, defended by Professor Luc Montagnier, has a positive turn.

According to him, the altered elements of this virus are eliminated as it spreads: “Nature does not accept any molecular tinkering, it will eliminate these unnatural changes and even if nothing is done, things will get better, but unfortunately after many deaths.”

This is enough to feed some heated debates! So much so that Professor Montagnier’s statements could also place him in the category of “conspiracy theorists”: “Conspirators are the opposite camp, hiding the truth,” he replies, without wanting to accuse anyone, but hoping that the Chinese will admit to what he believes happened in their laboratory.

To entice a confession from the Chinese he used the example of Iran which after taking full responsibility for accidentally hitting a Ukrainian plane was able to earn the respect of the global community. Hopefully the Chinese will do the right thing he adds. “In any case, the truth always comes out, it is up to the Chinese government to take responsibility.”

Implications: Leaving aside the geopolitics, this theory also explains why the virus weakens when mutations lose the unnatural pieces added in the lab. Since this is an RNA (not DNA) sequence mutations are slower, but inevitable. If correct, this theory works against fears of a second wave of infections. It also gives an unintended benefit from past lockdowns and shutdowns, slowing the rate of infections while the virus degrades itself.

Arctic Ice Bottoms at 3.7 Wadhams

The animation above shows Arctic ice extents from Sept. 1 to 16, 2020.  On the left are the Russian shelf seas already ice-free, and the Central Arctic retreating as well. Bottom left is Beaufort Sea losing ice. In the last week CAA in the center starts refreezing, and just above it Baffin Bay starts to add ice back.  At the top right Greenland Sea starts to refreeze.

Prof. Peter Wadhams made multiple predictions of an ice-free Arctic (extent as low as 1M km2), most recently to happen in 2015.  Thus was born the metric: 1 Wadham = 1M km2 Arctic ice extent. The details on the 2020 minimum are provided below.  Though there could be a dip lower in the next few days, the record shows a daily minimum of 3.7M km2 on September 11 (MASIE) and September 13 (SII).  While BCE (Beaufort, Chukchi, East Siberian seas) may lose more ice, gains have appeared on the Canadian side: CAA, Baffin Bay and Greenland Sea. So 3.7 Wadhams may well hold up as the daily low this year.  Note that day 260, September 16, 2020, is the date for the lowest annual extent averaged over the last 13 years.

The discussion later on refers to the September monthly average extent serving as the usual climate metric.  That stands presently at 3.9M km2 for MASIE and 3.8M km2 for SII, with both expected to rise slightly by month end as ice extent typically recovers.

The melting season this year showed ice extents briefly near the 13-year average on day 241, then dropping rapidly to go below all other years except 2012.  That year was exceptional due to the 2012 Great Arctic August Cyclone that pushed drift ice around producing a new record minimum.  The anomaly this year was the high pressure ridge persisting over Siberia producing an extremely hot summer there.  This resulted in early melting of the Russian shelf seas along with bordering parts of the Central Arctic.

 

As discussed below, the daily minimum on average occurs on day 260, but a given year may be earlier or later.  The 2020 extent began to flatten from day 248 onward in SII (orange) while MASIE showed stabilizing from day 252 with an upward bump in recent days.  Both lines are drawing near 2019 and 2007 while departing from 2012. The table below shows the distribution of ice in the various regions of the Arctic Ocean.

Region                                 2020 Day 260   Day 260 Average   2020-Ave.   2012 Day 260   2020-2012
 (0) Northern_Hemisphere                    3770950           4483942     -712991        3398785      372165
 (1) Beaufort_Sea                            503701            471897       31804         214206      289495
 (2) Chukchi_Sea                              49625            143329      -93704          52708       -3084
 (3) East_Siberian_Sea                        97749            278150     -180400          47293       50456
 (4) Laptev_Sea                                   0            124811     -124811          21509      -21509
 (5) Kara_Sea                                 12670             19162       -6492              0       12670
 (6) Barents_Sea                                  0             20787      -20787              0           0
 (7) Greenland_Sea                           258624            191964       66660         253368        5256
 (8) Baffin_Bay_Gulf_of_St._Lawrence          20839             31394      -10555          12695        8144
 (9) Canadian_Archipelago                    328324            269950       58374         154875      173449
 (10) Hudson_Bay                                104              6195       -6092           3863       -3759
 (11) Central_Arctic                        2498209           2925271     -427062        2637199     -138990

The extent numbers show that this year’s melt is dominated by the surprisingly hot Siberian summer, leading to major deficits in all the Eurasian shelf seas–East Siberian, Laptev, Kara.  As well, the bordering parts of the Central Arctic show a sizeable deficit to average. The main surpluses to average and to 2012 are Beaufort, Greenland Sea and CAA. Overall 2020 is 713k km2 below the 13-year average, a deficit of 16%.
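
For readers who want to check the arithmetic, here is a minimal Python sketch using the Day 260 extents from the table above; the values are simply copied from the table, and the 16% figure is the NH deficit divided by the 13-year average.

```python
# Minimal sketch: recompute the Day 260 comparisons from the MASIE table above.
# Values are ice extents in km2, copied from the table (2020, 13-yr average, 2012).

regions = {
    "Northern_Hemisphere":             (3770950, 4483942, 3398785),
    "Beaufort_Sea":                     (503701,  471897,  214206),
    "Chukchi_Sea":                       (49625,  143329,   52708),
    "East_Siberian_Sea":                 (97749,  278150,   47293),
    "Laptev_Sea":                            (0,  124811,   21509),
    "Kara_Sea":                          (12670,   19162,       0),
    "Barents_Sea":                           (0,   20787,       0),
    "Greenland_Sea":                    (258624,  191964,  253368),
    "Baffin_Bay_Gulf_of_St._Lawrence":   (20839,   31394,   12695),
    "Canadian_Archipelago":             (328324,  269950,  154875),
    "Hudson_Bay":                           (104,    6195,    3863),
    "Central_Arctic":                  (2498209, 2925271, 2637199),
}

for name, (y2020, avg, y2012) in regions.items():
    print(f"{name:32s} 2020-Ave: {y2020 - avg:9d}   2020-2012: {y2020 - y2012:9d}")

# Overall deficit to the 13-year average (~713k km2, i.e. about 16%)
y2020, avg, _ = regions["Northern_Hemisphere"]
print(f"NH deficit: {avg - y2020} km2 = {100 * (avg - y2020) / avg:.0f}% below average")
```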

Background from Previous Post Outlook for Arctic Ice Minimum

The annual competition between ice and water in the Arctic ocean is approaching the maximum for water, which typically occurs mid September.  After that, diminishing energy from the slowly setting sun allows oceanic cooling causing ice to regenerate. Those interested in the dynamics of Arctic sea ice can read numerous posts here.  Note that for climate purposes the annual minimum is measured by the September monthly average ice extent, since the daily extents vary and will go briefly lower on or about day 260.

The Bigger Picture 

We are close to the annual Arctic ice extent minimum, which typically occurs on or about day 260 (mid September). Some take any year’s slightly lower minimum as proof that Arctic ice is dying, but the image above shows the Arctic heart is beating clear and strong.

Over this decade, the Arctic ice minimum has not declined, but since 2007 looks like fluctuations around a plateau. By mid-September, all the peripheral seas have turned to water, and the residual ice shows up in a few places. The table below indicates where we can expect to find ice this September. Numbers are area units of Mkm2 (millions of square kilometers).

Day 260 Ice Extents (M km2)
Arctic Regions          2007   2010   2012   2014   2015   2016   2017   2018   2019   13-yr Avg
Central Arctic Sea      2.67   3.16   2.64   2.98   2.93   2.92   3.07   2.91   2.97   2.93
BCE                     0.50   1.08   0.31   1.38   0.89   0.52   0.84   1.16   0.46   0.89
LKB                     0.29   0.24   0.02   0.19   0.05   0.28   0.26   0.02   0.11   0.16
Greenland & CAA         0.56   0.41   0.41   0.55   0.46   0.45   0.52   0.41   0.36   0.46
B&H Bays                0.03   0.03   0.02   0.02   0.10   0.03   0.07   0.05   0.01   0.04
NH Total                4.05   4.91   3.40   5.13   4.44   4.20   4.76   4.56   3.91   4.48

The table includes three early years of note along with the last 6 years compared to the 13 year average for five contiguous arctic regions. BCE (Beaufort, Chukchi and East Siberian) on the Asian side are quite variable as the largest source of ice other than the Central Arctic itself.   Greenland Sea and CAA (Canadian Arctic Archipelago) together hold almost 0.5M km2 of ice at annual minimum, fairly consistently.  LKB are the European seas of Laptev, Kara and Barents, a smaller source of ice, but a difference maker some years, as Laptev was in 2016.  Baffin and Hudson Bays are inconsequential as of day 260.

For context, note that the average maximum has been 15M, so on average the extent shrinks to 30% of the March high before growing back the following winter.  In this context, it is foolhardy to project any summer minimum forward to proclaim the end of Arctic ice.
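
As a quick check on the regional shares and the 30% figure, here is a small Python sketch using the 13-year averages from the table above and the approximate 15M km2 March maximum quoted in the text.

```python
# Sketch: regional shares of the 13-year average Day 260 extent (table above),
# plus the minimum-to-maximum ratio quoted in the text.

avg_day260 = {                     # M km2, 13-year averages from the table
    "Central Arctic Sea": 2.93,
    "BCE (Beaufort, Chukchi, East Siberian)": 0.89,
    "LKB (Laptev, Kara, Barents)": 0.16,
    "Greenland Sea & CAA": 0.46,
    "Baffin & Hudson Bays": 0.04,
}
nh_total = 4.48                    # 13-year average NH total at Day 260

for region, extent in avg_day260.items():
    print(f"{region:42s} {extent:4.2f} M km2  ({extent / nh_total:.0%} of minimum)")

# Average March maximum is roughly 15 M km2, so the minimum retains about 30% of it.
print(f"Minimum/maximum ratio: {nh_total / 15.0:.0%}")
```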

Resources:  Climate Compilation II Arctic Sea Ice

Trump Did Listen to Pandemic Experts. They just failed him.

President Trump, accompanied by, from left, Anthony S. Fauci, Vice President Pence and Robert Redfield, reacts to a question during a news conference on the coronavirus in the press briefing room at the White House in Washington on Feb. 29. (Andrew Harnik/AP)

Marc Thiessen writes at Washington Post Trump did listen to experts on the pandemic. They just failed him. Excerpts in italics with my bolds.

A narrative has taken hold since the release of Bob Woodward’s latest book that President Trump was told in late January that the coronavirus was spreading across America at pandemic rates but ignored the dire warnings of government experts.

That narrative is wrong and unfair.

The truth is that during the crucial early weeks of the pandemic, the government’s public health leaders assured Trump that the virus was not spreading in communities in the United States. They gave him bad intelligence because of two catastrophic failures: First, they relied on the flu surveillance system that failed to detect the rapid spread of covid-19; and second, they bungled the development of a diagnostic test for covid-19 that would have shown they were wrong, barred commercial labs from developing tests, and limited tests to people who had traveled to foreign hot spots or had contact with someone with a confirmed case.

As a result, according to former Food and Drug Administration chief Scott Gottlieb, they were “situationally blind” to the spread of the virus.

In an interview Sunday on CBS News’s “Face the Nation,” Gottlieb said officials at the Department of Health and Human Services “over-relied on a surveillance system that was built for flu and not for coronavirus without recognizing that it wasn’t going to be as sensitive at detecting coronavirus spread as it was for flu because the two viruses spread very differently.” Officials were looking for a spike in patients presenting with flu-like respiratory symptoms at hospitals. But there was a lag of a week or more in reporting data, and because many of those infected with the novel coronavirus didn’t develop symptoms, or did not present with respiratory illness, they were not picked up by this monitoring. As a result, officials concluded “therefore, coronavirus must not be spreading.”

They also failed to detect the spread, Gottlieb said, because for six weeks, they “had no diagnostic tests in the field to screen people.” That is because the FDA and HHS refused to allow private and academic labs to get into the testing game with covid-19 tests of their own. The FDA issued only a single emergency authorization to the Centers for Disease Control and Prevention — and then scientists at the CDC contaminated the only approved test kits with sloppy lab practices, rendering them ineffective. The results were disastrous.

How badly did the system fail? Researchers at the University of Notre Dame found that only 1,514 cases and 39 deaths had been officially reported by early March, when in truth more than 100,000 people were already infected. Because of this failure, Gottlieb said that as covid-19 was spreading, CDC officials were “telling the coronavirus task force … that there was no spread of coronavirus in the United States,” adding “They were adamant.”

It is often noted that on Feb. 25, Nancy Messonnier, director of the CDC’s National Center for Immunization and Respiratory Diseases, pointed to the spread of the virus abroad and said, “It’s not a question of if this will happen but when this will happen and how many people in this country will have severe illnesses” — and Trump reportedly nearly fired her. But Messonnier also said in that same interview, “To date, our containment strategies have been largely successful. As a result, we have very few cases in the United States and no spread in the community.” She added that the administration’s “proactive approach of containment and mitigation will delay the emergence of community spread in the United States while simultaneously reducing its ultimate impact” when it arrives. She had no idea it already had.

On Feb. 20, Gottlieb co-authored a Wall Street Journal op-ed raising concerns that infections were more widespread than CDC numbers showed. The next day, on Feb. 21, Anthony S. Fauci said in a CNBC interview that he was confident this was not the case. “Certainly, it’s a possibility,” Fauci said, “but it is extraordinarily unlikely.” He explained that if there were infected people in the United States who were not identified, isolated and traced, “you would have almost an exponential spread of an infection of which we are all looking out for. We have not seen that, so it is extremely unlikely that it is happening.” Fauci said the “pattern of what we’re seeing argues against infections that we’re missing.”

It was not until Feb. 26 that the first possible case of suspected community spread was reported. Even then, senior health officials played down the danger. On Feb. 29, CDC director Robert Redfield said at a White House press briefing: “The American public needs to go on with their normal lives. Okay? We’re continuing to aggressively investigate these new community links. … But at this stage, again, the risk is low.” It was not until early March that the experts realized just how disastrously wrong they had been.

So, when Trump told the American people on Feb. 25 that “the coronavirus … is very well under control in our country. We have very few people with it,” he was not lying or playing down more dire information he was being told privately. He was repeating exactly what experts such as Fauci, Redfield and Messonnier were telling him.

Trump did make serious errors of his own during this early period. On deputy national security adviser Matthew Pottinger’s advice, he barred travel by non-U.S. citizens from China on Jan. 31. But he did not also shut down travel from much of Europe, as Pottinger recommended, until March 11 — almost six weeks later — because of objections from his economic advisers. The outbreak in New York, the worst of the pandemic, was seeded by travelers from Italy.

But the main reason we were not able to contain the virus is that for six critical weeks, the health experts told the president covid-19 was not spreading in U.S. communities when it was, in fact, spreading like wildfire. They were wrong. The experts failed the president — and the country.

Footnote:  For what President Trump has done to fight the Chinese virus, see:

Trump DPA Initiatives Against China Virus

 

Ocean Cooling Pauses August 2020

The best context for understanding decadal temperature changes comes from the world’s sea surface temperatures (SST), for several reasons:

  • The ocean covers 71% of the globe and drives average temperatures;
  • SSTs have a constant water content (unlike air temperatures), so give a better reading of heat content variations;
  • A major El Nino was the dominant climate feature in recent years.

HadSST is generally regarded as the best of the global SST data sets, and so the temperature story here comes from that source, the latest version being HadSST3.  More on what distinguishes HadSST3 from other SST products at the end.

The Current Context

The cool 2020 Spring was not just your local experience; it’s the result of Earth’s ocean cooling off after last summer’s warming in the Northern Hemisphere.  The chart below shows SST monthly anomalies as reported in HadSST3 starting in 2015 through August 2020. After three straight months of cooling led by the tropics and SH, August anomalies are up slightly.


A global cooling pattern is seen clearly in the Tropics since its peak in 2016, joined by NH and SH cycling downward since 2016.  In 2019 all regions had been converging to reach nearly the same value in April.

Then NH rose exceptionally by almost 0.5C over the four summer months, in August exceeding previous summer peaks in NH since 2015.  In the 4 succeeding months, that warm NH pulse reversed sharply.  Now again NH temps are warming to a 2020 summer peak, matching 2019.  This had been offset by sharp cooling in the Tropics and SH, which instead warmed slightly last month. Thus the Global anomaly decreased steadily after March, then rose, and presently matches last summer.

Note that higher temps in 2015 and 2016 were first of all due to a sharp rise in Tropical SST, beginning in March 2015, peaking in January 2016, and steadily declining back below its beginning level. Secondly, the Northern Hemisphere added three bumps on the shoulders of Tropical warming, with peaks in August of each year.  A fourth NH bump was lower and peaked in September 2018.  As noted above, a fifth peak in August 2019 and a sixth August 2020 exceeded the four previous upward bumps in NH.

And as before, note that the global release of heat was not dramatic, due to the Southern Hemisphere offsetting the Northern one.  The major difference between now and 2015-2016 is the absence of Tropical warming driving the SSTs, along with SH anomalies reaching nearly the lowest in this period.

A longer view of SSTs

The graph below  is noisy, but the density is needed to see the seasonal patterns in the oceanic fluctuations.  Previous posts focused on the rise and fall of the last El Nino starting in 2015.  This post adds a longer view, encompassing the significant 1998 El Nino and since.  The color schemes are retained for Global, Tropics, NH and SH anomalies.  Despite the longer time frame, I have kept the monthly data (rather than yearly averages) because of interesting shifts between January and July.

1995 is a reasonable (ENSO neutral) starting point prior to the first El Nino.  The sharp Tropical rise peaking in 1998 is dominant in the record, starting Jan. ’97 to pull up SSTs uniformly before returning to the same level Jan. ’99.  For the next 2 years, the Tropics stayed down, and the world’s oceans held steady around 0.2C above 1961 to 1990 average.

Then comes a steady rise over two years to a lesser peak Jan. 2003, but again uniformly pulling all oceans up around 0.4C.  Something changes at this point, with more hemispheric divergence than before. Over the 4 years until Jan 2007, the Tropics go through ups and downs, NH a series of ups and SH mostly downs.  As a result the Global average fluctuates around that same 0.4C, which also turns out to be the average for the entire record since 1995.

2007 stands out with a sharp drop in temperatures so that Jan.08 matches the low in Jan. ’99, but starting from a lower high. The oceans all decline as well, until temps build peaking in 2010.

Now again a different pattern appears.  The Tropics cool sharply to Jan 11, then rise steadily for 4 years to Jan 15, at which point the most recent major El Nino takes off.  But this time in contrast to ’97-’99, the Northern Hemisphere produces peaks every summer pulling up the Global average.  In fact, these NH peaks appear every July starting in 2003, growing stronger to produce 3 massive highs in 2014, 15 and 16.  NH July 2017 was only slightly lower, and a fifth NH peak still lower in Sept. 2018.

The highest summer NH peak came in 2019, only this time the Tropics and SH are offsetting rather than adding to the warming. Since 2014 SH has played a moderating role, offsetting the NH warming pulses. Now August 2020 is matching last summer’s unusually high NH SSTs. (Note: these are high anomalies on top of the highest absolute temps in the NH.)

What to make of all this? The patterns suggest that in addition to El Ninos in the Pacific driving the Tropic SSTs, something else is going on in the NH.  The obvious culprit is the North Atlantic, since I have seen this sort of pulsing before.  After reading some papers by David Dilley, I confirmed his observation of Atlantic pulses into the Arctic every 8 to 10 years.

But the peaks coming nearly every summer in HadSST require a different picture.  Let’s look at August, the hottest month in the North Atlantic from the Kaplan dataset.
The AMO Index is from Kaplan SST v2, the unaltered and not detrended dataset. By definition, the data are monthly average SSTs interpolated to a 5×5 grid over the North Atlantic, basically 0 to 70N. The graph shows August warming began after 1992 up to 1998, with a series of matching years since, including 2020.  Because the N. Atlantic has partnered with the Pacific ENSO recently, let’s take a closer look at some AMO years in the last 2 decades.
This graph shows monthly AMO temps for some important years. The peak years were 1998, 2010 and 2016, with the latter emphasized as the most recent. The other years show lesser warming, with 2007 emphasized as the coolest in the last 20 years. Note the red 2018 line is at the bottom of all these tracks. The black line shows that 2020 began slightly warm, then set records for 3 months, then dropped below 2016 and 2017, and now is matching 2016.
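
For those who want to reproduce this kind of index, a minimal Python sketch follows. It only illustrates the definition quoted above (monthly SSTs averaged over the North Atlantic, roughly 0 to 70N, on a 5×5 grid); the array layout, the 0-360 longitude convention and the unweighted basin mean are assumptions for the example, not the Kaplan processing code.

```python
import numpy as np

# Minimal sketch of an undetrended AMO-style index, per the definition above:
# average monthly SST over the North Atlantic box, roughly 0-70N and ~80W eastward.
def amo_index(sst, lats, lons):
    """sst[time, lat, lon] on a 5x5 grid; returns one basin-average value per month."""
    lat_mask = (lats >= 0) & (lats <= 70)
    lon_mask = (lons >= 280)                  # ~80W to the Greenwich meridian (0-360 lons)
    box = sst[:, lat_mask, :][:, :, lon_mask]
    return np.nanmean(box, axis=(1, 2))       # simple (unweighted) basin mean

# Tiny synthetic example; in a real monthly series the August values are index[7::12].
lats = np.arange(-87.5, 90, 5.0)
lons = np.arange(2.5, 360, 5.0)
sst  = np.random.default_rng(1).normal(0.3, 0.5, size=(24, lats.size, lons.size))
index = amo_index(sst, lats, lons)
print(index[7], index[19])                    # the two synthetic August values
```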

Summary

The oceans are driving the warming this century.  SSTs took a step up with the 1998 El Nino and have stayed there with help from the North Atlantic, and more recently the Pacific northern “Blob.”  The ocean surfaces are releasing a lot of energy, warming the air, but eventually will have a cooling effect.  The decline after 1937 was rapid by comparison, so one wonders: How long can the oceans keep this up? If the pattern of recent years continues, NH SST anomalies may rise slightly in coming months, but once again, ENSO, which has weakened, will probably determine the outcome.

Footnote: Why Rely on HadSST3

HadSST3 is distinguished from other SST products because HadCRU (Hadley Climatic Research Unit) does not engage in SST interpolation, i.e. infilling estimated anomalies into grid cells lacking sufficient sampling in a given month. From reading the documentation and from queries to Met Office, this is their procedure.

HadSST3 imports data from gridcells containing ocean, excluding land cells. From past records, they have calculated daily and monthly average readings for each grid cell for the period 1961 to 1990. Those temperatures form the baseline from which anomalies are calculated.

In a given month, each gridcell with sufficient sampling is averaged for the month and then the baseline value for that cell and that month is subtracted, resulting in the monthly anomaly for that cell. All cells with monthly anomalies are averaged to produce global, hemispheric and tropical anomalies for the month, based on the cells in those locations. For example, Tropics averages include ocean grid cells lying between latitudes 20N and 20S.

Gridcells lacking sufficient sampling that month are left out of the averaging, and the uncertainty from such missing data is estimated. IMO that is more reasonable than inventing data to infill. And it seems that the Global Drifter Array displayed in the top image is providing more uniform coverage of the oceans than in the past.
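
To make the procedure concrete, here is a simplified Python sketch of the gridcell anomaly calculation described above. It is not HadSST code: the array names are assumptions for the example, and area weighting of cells is omitted for brevity.

```python
import numpy as np

# Simplified sketch of the anomaly procedure described above (not HadSST code).
# Assumed inputs for the example:
#   sst[month, lat, lon]     monthly-mean SST per ocean gridcell; NaN where a cell
#                            lacks sufficient sampling that month
#   clim[calmonth, lat, lon] 1961-1990 baseline mean for each cell and calendar month
#   lats[lat]                latitude of each grid row

def monthly_anomaly(sst, clim, month_index, lats, lat_band=None):
    calmonth = month_index % 12
    anom = sst[month_index] - clim[calmonth]       # each cell vs. its own baseline
    if lat_band is not None:                       # e.g. (-20, 20) for the Tropics
        lo, hi = lat_band
        anom = anom[(lats >= lo) & (lats <= hi), :]
    return np.nanmean(anom)                        # cells lacking data are left out

# Example with synthetic data on a 5-degree grid
rng = np.random.default_rng(0)
lats = np.arange(-87.5, 90, 5.0)
sst  = 15 + rng.normal(0, 1, size=(12, lats.size, 72))
clim = np.full((12, lats.size, 72), 15.0)
print(monthly_anomaly(sst, clim, 7, lats))                      # global
print(monthly_anomaly(sst, clim, 7, lats, lat_band=(-20, 20)))  # Tropics
```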


USS Pearl Harbor deploys Global Drifter Buoys in Pacific Ocean

Oil Demand No End in Sight

Your next car? (NurPhoto via Getty Images)

Michael Lynch writes at Forbes  Peak Oil Demand! Again?  Excerpts in italics with my bolds.

Amid stubbornly low prices and lackluster demand we’re now seeing, on cue, a new round of predictions that oil demand has already or is about to peak (including even scenarios published by BP). These cannot be dismissed out of hand — as the peak oil supply arguments could, inasmuch as they were either based on bad math or represented assumptions that the industry couldn’t continue overcoming its age-old problems like depletion. (See my book The Peak Oil Scare if you want the full treatment.)

Now, the news is highlighting various predictions that the pandemic will accelerate the point at which global oil demand peaks, which is certainly much more sexy than business as usual. When groups like Greenpeace or the Sierra Club predict or advocate for peak oil demand, it doesn’t make much news: dog bites man. But, as the newspeople say, when man bites dog it is news. Thus, when oil company execs seem to believe peak oil demand is near, you get headlines like, “BP Says the Era of Oil Demand-Growth is Over,” The Guardian newspaper proclaiming that “Even the Oil Giants Can Now Foresee the End of the Oil Age,” and Reuters in July: “End game for oil? OPEC prepares for an age of dwindling demand.”

Anyone who is familiar with the oil industry knows that a peak in oil production has been predicted many times throughout the decades, never to come true (or deter future predictions of same). But few realize that the end of the industry has been repeatedly predicted as well, including both the demise of an old-fashioned business model, but also replacement of petroleum by newer, better technologies or fuels.

Until the 1970s, few saw an end to the oil business. The automobile boom created a seemingly insatiable demand for oil, one which has only slowed when prices rose and/or economic growth stalled, neither of which has ever proved permanent.

Yet there have been three particular apocalyptic threads put forward for the oil industry: the industry would spiral into decline, demand would peak, and/or a new fuel or technology would displace petroleum.

The oil industry’s business model was challenged as far back as 1977, when Mobil CEO Rawleigh Warner tried to diversify out of the oil business for fear that those who didn’t would “go the way of the buggy-whip makers.” Similarly, Mike Bowlin, ARCO’s CEO, declared in 1999 “We’ve embarked on the beginning of the last days of oil.” Enron’s Jeff Skilling (whatever happened to him?) said he “had little use for anything that smacked of a traditional energy company — calling companies like Exxon Mobil ‘dinosaurs.’”

Vanishing demand has been another common motif for prognosticators, especially when high prices caused demand to slump. Exxon CEO Rex Tillerson (whatever happened to him?) thought in 2009, when gasoline prices were $4/gallon, that gasoline demand had peaked in 2007. (The figure below shows how that worked out.)

Figure: U.S. Gasoline Demand (tb/d), showing gasoline demand peaking and then recovering (chart by the author from EIA data).

Sheikh Yamani, the former Saudi Oil Minister, warned in 2000 that in thirty years there would be “no buyers” for oil, because fuel cell technology would be commercial by the end of that decade. (From 2000, oil demand increased by 20 mb/d before the pandemic.) The fabled Economist magazine agreed with Yamani in 2003, “Finally, advances in technology are beginning to offer a way for economies, especially those of the developed world, to diversify their supplies of energy and reduce their demand for petroleum…Hydrogen fuel cells and other ways of storing and distributing energy are no longer a distant dream but a foreseeable reality.”

They might have been echoing William Ford, CEO of Ford Motor Company, who said in 2000, “Fuel cells could be the predominant automotive power source in 25 years.” Twenty years later, they are insignificant.

Amory Lovins, who has probably received more awards than Tom Hanks, has long argued that extremely efficient (and expensive) cars would reduce gasoline demand substantially, including in his (and co-authors’) Winning the Oil Endgame, which argued that a combination of efficiency and cellulosic ethanol could replace our imports from the Persian Gulf (then about 2.5 mb/d). (They’ve been replaced, but by shale oil, and demand was unchanged since their prediction.)

He was hardly alone, with Richard Lugar and James Woolsey in a 1999 Foreign Affairs article calling cellulosic ethanol “The New Petroleum.” Perhaps they relied on a 1996 Atlantic article by Charles Curtis and Joseph Romm (“Mideast Oil Forever”) which argued that cellulosic ethanol should see its cost fall to about $1/gallon (adjusted for inflation). (In 2017, the National Renewable Energy Laboratory put the cost at $5/gallon.)

One of my persistent themes has been that too much writing is not based on rigorous analysis but superficial ideas, a few anecdotes and footnotes, supposedly supporting Herculean changes.

(See Tom Nichols The Death of Expertise.) Peak oil demand is the flavor of the month and people are rushing to publish predictions, prescriptions, guidelines, and fantastical views of a fantastical future. But petroleum remains by far the fuel of choice in transportation and the pandemic seems unlikely to change that. Sexy should be left for HBO and not energy analysis.

There are many reasons the demand for fossil fuels is strong and growing.  

Footnote:  Shareholder activism against Big Oil is based on a cascade of unlikely suppositions including declining demand and stranded assets.  See: Behind the Alarmist Scene

 

 

Attenborough’s Pandemic Porn

Ross Clark writes at The Spectator What David Attenborough’s ‘Extinction: The Facts’ didn’t tell you.  Excerpts in italics with my bolds.

It was only a matter of time before Covid-19 got swept up into the wider narrative of humans facing impending doom thanks to our abuse of the planet. But one might have expected better of Sir David Attenborough. His latest BBC documentary, Extinction: The Facts, broadcast on Sunday night might as well have been produced by Extinction Rebellion, so determined was it to present an hysterical picture of apocalypse caused by consumerism and capitalism. Just to ram home the point, one contributor, naturalist Robert Watson, spoke of ‘many in the private sector making a huge profit at the expense of the natural world’, seemingly oblivious to the far greater rape of the environment committed by the former Soviet Union and other socialist countries.

But it was the section on Covid-19 which really made the jaw drop. ‘Scientists have even linked the destructive relationship with nature to the emergence of Covid-19,’ we were told. ‘If we carry on like this we will see more epidemics.’ It went on: ‘We’ve seen an increasing rate of pandemic emergencies. We’ve had swine flu, SARS, ebola. We’ve found that we’re behind every single pandemic. One of the most obvious ways we’re making it more likely that a virus would jump [from animals to humans] is that we’re having lots of contacts with animals – wildlife trade is at unprecedented levels.’

It then tried to present two examples of food production – intensive cattle ranching and wildlife markets in China – as part of the same problem.

It is perfectly true that Chinese ‘wet markets’, where many different species are sold and killed alongside each other, have been implicated in SARS and Covid-19, the former involving civets and the latter most likely bats. Breeding poultry and pigs in close proximity has also been suggested as a breeding ground for flu viruses which can then jump to humans.

But these are hardly examples of the mass, intensive agriculture which feeds an increasing proportion of the global population. On the contrary, it is the exact opposite.

It is all those old-fashioned farmyards depicted in children’s books which mixed species and brought humans into close contact with animals. Modern livestock farming, by contrast, involves huge monocultures, bred in environments where infectious disease is very tightly-controlled. An outbreak, say, of swine flu is not going to be tolerated for long in a pig farm in a developed country – though it might well be allowed to spread in a developing country where large numbers of people keep pigs in their back yards. The only way in which most of us come into contact with a farm animal now is when a slab of it is presented to us on a plate.

The idea that we face a terrifying future of infectious disease flies in the face of reality. In developed countries infectious disease has gone from being the main cause of death – especially in children – to being a rarity. Globally, the chances of dying from an infectious disease have plummeted in recent decades. According to the Institute for Health Metrics and Evaluation, the proportion of global deaths caused by communicable disease, maternal and neonatal conditions fell from 46 per cent in 1990 to 28 per cent in 2017.

Covid-19 will in no way reverse this: so far, it has caused fewer than two per cent of the 56 million deaths which would have been expected this year anyway.

Pandemics, of course, have always been a regular feature of human life. But are novel diseases becoming more commonplace? Well, yes in the sense that we have become better at identifying them – the first virus, after all, was not discovered until 1900, and we have become ever better at isolating and identifying them.

Little over a century ago, we would have had no idea what Covid-19 was – it might possibly have acquired a name, maybe ‘coughing disease’, but we would have had no real idea whether it was novel or not. A study by Brown University in 2014, published in the Journal of the Royal Society, found that there has been a rise in the number of outbreaks of novel infectious diseases since 1980, but also that there has been a decline in the numbers of people being affected by them. We have become much better at identifying diseases, and much better at controlling them. Covid might have inspired an unprecedented global response, but in historical terms it is a pretty gentle pandemic – even now it has a lower death toll than Hong Kong flu, which hardly affected our lives at all.

It is shocking that the BBC can have allowed such one-sided green propaganda onto our screens without putting issues of human development and the natural world into proper context. But then David Attenborough has become a Greta of the Third Age – no-one dares question what he does because he is a ‘national treasure’. Someone at the BBC needs to pluck up the courage.

 

 

 


Why the Left Coast is Still Burning

Update September 13, 2020:  This reprint of a post two years ago shows nothing has changed, except for the worse.

It is often said that truth is the first casualty in the fog of war. That is especially true of the war against fossil fuels and smoke from wildfires. The forests are burning in California, Oregon and Washington, all of them steeped in liberal, progressive and post-modern ideology. There are human reasons that fires are out of control in those places, and it is not due to CO2 emissions. As we shall see, Zinke is right and Brown is wrong. Some truths the media are not telling you in their drive to blame global warming/climate change. Text below is excerpted from sources linked at the end.

1. The World and the US are not burning.

The geographic extent of this summer’s forest fires won’t come close to the aggregate record for the U.S. Far from it. Yes, there are some terrible fires now burning in California, Oregon, and elsewhere, and the total burnt area this summer in the U.S. is likely to exceed the 2017 total. But as the chart above shows, the burnt area in 2017 was less than 20% of the record set way back in 1930. The same is true of the global burnt area, which has declined over many decades.

In fact, this 2006 paper reported the following:

“Analysis of charcoal records in sediments [31] and isotope-ratio records in ice cores [32] suggest that global biomass burning during the past century has been lower than at any time in the past 2000 years. Although the magnitude of the actual differences between pre-industrial and current biomass burning rates may not be as pronounced as suggested by those studies [33], modelling approaches agree with a general decrease of global fire activity at least in past centuries [34]. In spite of this, fire is often quoted as an increasing issue around the globe [11,26–29].”

People have a tendency to exaggerate the significance of current events. Perhaps the youthful can be forgiven for thinking hot summers are a new phenomenon. Incredibly, more “seasoned” folks are often subject to the same fallacies. The fires in California have so impressed climate alarmists that many of them truly believe global warming is the cause of forest fires in recent years, including the confused bureaucrats at Cal Fire, the state’s firefighting agency. Of course, the fires have given fresh fuel to self-interested climate activists and pressure groups, an opportunity for greater exaggeration of an ongoing scare story.

This year, however, and not for the first time, a high-pressure system has been parked over the West, bringing southern winds up the coast along with warmer waters from the south, keeping things warm and dry inland. It’s just weather, though a few arsonists and careless individuals always seem to contribute to the conflagrations. Beyond all that, the impact of a warmer climate on the tendency for biomass to burn is considered ambiguous for realistic climate scenarios.

2. Public forests are no longer managed due to litigation.

According to a 2014 white paper titled; ‘Twenty Years of Forest Service Land Management Litigation’, by Amanda M.A. Miner, Robert W. Malmsheimer, and Denise M. Keele: “This study provides a comprehensive analysis of USDA Forest Service litigation from 1989 to 2008. Using a census and improved analyses, we document the final outcome of the 1125 land management cases filed in federal court. The Forest Service won 53.8% of these cases, lost 23.3%, and settled 22.9%. It won 64.0% of the 669 cases decided by a judge based on cases’ merits. The agency was more likely to lose and settle cases during the last six years; the number of cases initiated during this time varied greatly. The Pacific Northwest region along with the Ninth Circuit Court of Appeals had the most frequent occurrence of cases. Litigants generally challenged vegetative management (e.g. logging) projects, most often by alleging violations of the National Environmental Policy Act and the National Forest Management Act. The results document the continued influence of the legal system on national forest management and describe the complexity of this litigation.”

There is abundant evidence to support the position that when any forest project posits vegetative management in forests as a pretense for a logging operation, salvage or otherwise, litigation is likely to ensue, and in addition to NEPA, the USFS uses the Property Clause to address any potential removal of ‘forest products’. Nevertheless, the USFS currently spends more than 50% of its total budget on wildfire suppression alone; about $1.8 billion annually, while there is scant spending for wildfire prevention.

3. Mega fires are the unnatural result of fire suppression.

And what of the “mega-fires” burning in the West, like the huge Mendocino Complex Fire and last year’s Thomas Fire? Unfortunately, many decades of fire suppression measures — prohibitions on logging, grazing, and controlled burns — have left the forests with too much dead wood and debris, especially on public lands. From the last link:

“Oregon, like much of the western U.S., was ravaged by massive wildfires in the 1930s during the Dust Bowl drought. Megafires were largely contained due to logging and policies to actively manage forests, but there’s been an increasing trend since the 1980s of larger fires.

Active management of the forests and logging kept fires at bay for decades, but that largely ended in the 1980s over concerns about cutting too many old growth trees and harming the northern spotted owl. Lawsuits from environmental groups hamstrung logging, and government planners cut back on thinning trees and road maintenance.

[Bob] Zybach [a forester] said Native Americans used controlled burns to manage the landscape in Oregon, Washington and northern California for thousands of years. Tribes would burn up to 1 million acres a year on the west coast to prime the land for hunting and grazing, Zybach’s research has shown.

‘The Indians had lots of big fires, but they were controlled,’ Zybach said. ‘It’s the lack of Indian burning, the lack of grazing’ and other active management techniques that caused fires to become more destructive in the 19th and early 20th centuries before logging operations and forest management techniques got fires under control in the mid-20th Century.”

4. Bad federal forest administration started in 1990s.

Bob Zybach feels like a broken record. Decades ago he warned government officials that allowing Oregon’s forests to grow unchecked by proper management would result in catastrophic wildfires.

While some want to blame global warming for the uptick in catastrophic wildfires, Zybach said a change in forest management policies is the main reason Americans are seeing a return to more intense fires, particularly in the Pacific Northwest and California where millions of acres of protected forests stand.

“We knew exactly what would happen if we just walked away,” Zybach, an experienced forester with a PhD in environmental science, told The Daily Caller News Foundation.

Zybach spent two decades as a reforestation contractor before heading to graduate school in the 1990s. Then the Clinton administration in 1994 introduced its plan to protect old growth trees and spotted owls by strictly limiting logging.  Less logging also meant government foresters weren’t doing as much active management of forests — thinnings, prescribed burns and other activities to reduce wildfire risk.

Zybach told Evergreen magazine that year the Clinton administration’s plan for “naturally functioning ecosystems” free of human interference ignored history and would fuel “wildfires reminiscent of the Tillamook burn, the 1910 fires and the Yellowstone fire.”

Between 1952 and 1987, western Oregon saw only one major fire above 10,000 acres. The region’s relatively fire-free streak ended with the Silver Complex Fire of 1987 that burned more than 100,000 acres in the Kalmiopsis Wilderness area, torching rare plants and trees the federal government set aside to protect from human activities. The area has burned several more times since the 1980s.

“Mostly fuels were removed through logging, active management — which they stopped — and grazing,” Zybach said in an interview. “You take away logging, grazing and maintenance, and you get firebombs.”

Now, Oregonians are dealing with 13 wildfires engulfing 185,000 acres. California is battling nine fires scorching more than 577,000 acres, mostly in the northern forested parts of the state managed by federal agencies.

The Mendocino Complex Fire quickly spread to become the largest wildfire in California since the 1930s, engulfing more than 283,000 acres. The previous wildfire record was set by 2017’s Thomas Fire that scorched 281,893 acres in Southern California.

While bad fires still happen on state and private lands, most of the massive blazes happen on or around lands managed by the U.S. Forest Service and other federal agencies, Zybach said. Poor management has turned western forests into “slow-motion time bombs,” he said.

A feller buncher removing small trees that act as fuel ladders and transmit fire into the forest canopy.

5. True environmentalism is not nature love, but nature management.

While wildfires happen across the country, poor management by western states has turned entire regions into tinderboxes. California and Oregon have chosen to let nature take its course, even close to civilization.

Many in heartland America and along the Eastern Seaboard see logging and firelines when they travel to rural areas. These are part and parcel of life outside the city, where everyone knows that a few minor eyesores make their houses and communities safer from the primal fury of wildfires.

In other words, leaving the forests to “nature” and protecting the endangered Spotted Owl created denser forests (300-400 trees per acre rather than 50-80) with more fuel from the 129 million diseased and dead trees, which feed more intense and destructive fires. Yet California spends more than ten times as much money on electric vehicle subsidies ($335 million) as on reducing fuel on a mere 60,000 of 33 million acres of forests ($30 million).

Rancher Ross Frank worries that funding to fight fires in Western communities like Chumstick, Wash., has crowded out important land management work. Rowan Moore Gerety/Northwest Public Radio

Once again, global warming “science” is a camouflage for political ideology and gratifying myths about nature and human interactions with it. On the one hand, progressives seek “crises” that justify more government regulation and intrusion that limit citizen autonomy and increase government power. On the other, well-nourished moderns protected by technology from nature’s cruel indifference to all life can afford to indulge myths that give them psychic gratification at little cost to their daily lives.

As usual, bad cultural ideas lie behind these policies and attitudes. Most important is the modern fantasy that before civilization human beings lived in harmony and balance with nature. The rise of cities and agriculture began the rupture with the environment, “disenchanting” nature and reducing it to mere resources to be exploited for profit. In the early 19th century, the growth of science that led to the industrial revolution inspired the Romantic movement to contrast industrialism’s “Satanic mills” and the “shades of the prison-house” with a superior natural world and its “beauteous forms.” In an increasingly secular age, nature now became the Garden of Eden, and technology and science the signs of the fall that has banished us from the paradise enjoyed by humanity before civilization.

The untouched nature glorified by romantic environmentalism, then, is not our home. Ever since the cave men, humans have altered nature to make it more conducive to human survival and flourishing. After the retreat of the ice sheets changed the environment and animal species on which people had depended for food, humans in at least four different regions of the world independently invented agriculture to better manage the food supply. Nor did the American Indians, for example, live “lightly on the land” in a pristine “forest primeval.” They used fire to shape their environment for their own benefit. They burned forests to clear land for cultivation, to create pathways to control the migration of bison and other game, and to promote the growth of trees more useful for them.

Remaining trees and vegetation on the forest floor are more vigorous after removal of small trees for fuels reduction.

And today we continue to improve cultivation techniques and foods to make them more reliable, abundant, and nutritious, not to mention more various and safe. We have been so successful at managing our food supply that today one person out of ten provides food that used to require nine out of ten, obesity has become the plague of poverty, and famines result from political dysfunction rather than nature.

That’s why untouched nature, the wild forests filled with predators, has not been our home. The cultivated nature improved by our creative minds has. True environmentalism is not nature love, but nature management: applying skill and technique to make nature more useful for humans, at the same time conserving resources so that those who come after us will be able to survive. Managing resources and exploiting them for our benefit without destroying them is how we should approach the natural world. We should not squander resources or degrade them, not because of nature, but because when we do so, we are endangering the well-being of ourselves and future generations.

Conclusion

The annual burnt area from wildfires has declined over the past ninety years both in the U.S. and globally. Even this year’s wildfires are unlikely to come close to the average burn extent of the 1930s. The large wildfires this year are due to a combination of decades of poor forest management along with a weather pattern that has trapped warm, dry air over the West. The contention that global warming has played a causal role in the pattern is balderdash, but apparently that explanation seems plausible to the uninformed, and it is typical of the propaganda put forward by climate change interests.

Sources: 

https://www.frontpagemag.com/fpm/271044/junk-science-and-leftist-folklore-have-set-bruce-thornton

https://westernjournal.com/after-libs-blame-west-coast-fires-on-global-warming-forester-speaks-out/

https://sacredcowchips.net/tag/bob-zybach/

https://www.horsetalk.co.nz/2017/10/13/ecological-imbalance-wildfires-us-rangelands/

http://dailycaller.com/2018/08/08/mismanagement-forests-time-bombs/

Footnote:  So how do you want your forest fires, some small ones now or mega fires later?

New Better and Faster CV Tests

Kevin Pham reports on a breakthrough in coronavirus testing. Excerpts in italics with my bolds.

Another new test for COVID-19 was recently authorized — and this one could be a game-changer.

The Abbott Diagnostics BinaxNOW antigen test is a new point-of-care test that reportedly costs only $5 to administer, delivers results in as little as 15 minutes, and requires no laboratory equipment to perform. That means it can be used in clinics far from commercial labs or without relying on a nearby hospital lab.

That last factor is key. There are other quick COVID-19 tests on the market, but they have all required lab equipment that can be expensive to maintain and operate, and costs can be prohibitive in places that need tests most.

This kind of test is reminiscent of the rapid flu tests that are ubiquitous in clinics. They’ll give providers tremendous flexibility in testing for the disease not just in clinics but, with trained and licensed medical professionals, in schools, workplaces, camps, or any number of other places.

So what’s new about this test? Most of the current tests detect viral RNA, the genetic material of SARS-CoV-2. This is a very accurate way of detecting the virus, but it requires lab equipment to break apart the virus and amplify the amount of genetic material to high enough levels for detection.

The BinaxNOW test detects antigens — proteins unique to the virus that are usually detectable whenever there is an active infection.

Abbott says it intends to produce 50 million tests per month starting in October. That’s far more than the number tested in July, when we were breaking new testing records on a daily basis with approximately 23 million tests recorded.

There’s a more important reason to be encouraged by this test becoming available.  The viral load is not amplified by the test, so a positive result actually indicates a person needing isolation and treatment.  As explained in a previous post below, the PCR tests used up to now clutter up the record by showing as positive people with viral loads too low to be sick or to infect others.

Background from Previous Post The Truth About CV Tests

The people’s instincts are right, though they have been kept in the dark about this “pandemic” that isn’t.  Responsible citizens are starting to act out their outrage at being victimized by a medical-industrial complex (to update Eisenhower’s warning decades ago).  The truth is, governments are not justified in taking away inalienable rights to life, liberty and the pursuit of happiness.  There are several layers of disinformation involved in scaring the public.  This post digs into the CV tests, and why the results don’t mean what the media and officials claim.

For months now, I have been updating the progress in Canada of the CV outbreak.  A previous post later on goes into the details of extracting data on tests, persons testing positive (termed “cases” without regard for illness symptoms) and deaths after testing positive.  Currently, the contagion looks like this.

The graph shows that deaths are less than 5 a day, compared to a daily death rate of 906 in Canada from all causes.  Also significant is the positivity ratio:  the % of persons testing positive out of all persons tested each day.  That % has been fairly steady for months now:  1% positive means 99% of people are not infected. And this is despite more than doubling the rate of testing.

But what does testing positive actually mean?  Herein lies more truth that has been hidden from the public for the sake of an agenda to control free movement and activity.  Background context comes from  Could Rapid Coronavirus Testing Help Life Return To Normal?, an interview at On Point with Dr. Michael Mina.  Excerpts in italics with my bolds. H/T Kip Hansen

A sign displays a new rapid coronavirus test on the new Abbott ID Now machine at a ProHEALTH center in Brooklyn on August 27, 2020 in New York City. (Spencer Platt/Getty Images)

Dr. Michael Mina:

COVID tests can actually be put onto a piece of paper, very much like a pregnancy test. In fact, it’s almost exactly like a pregnancy test. But instead of looking for the hormones that tell if somebody is pregnant, it looks for the virus proteins that are part of the SARS-CoV-2 virus. And it would be very simple: You’d either swab the front of your nose or you’d take some saliva from under your tongue, for example, and put it onto one of these paper strips, essentially. And if you see a line, it means you’re positive. And if you see no line, it means you are negative, at least for having a high viral load that could be transmissible to other people.

An antigen is one of the proteins in the virus. Unlike the PCR test, which is what most people who have received a test to date have generally received, an antigen test does not look for the genome of the virus. PCR-type tests look for the virus’s RNA; you could think of RNA the same way that humans have DNA. This virus has RNA. But instead of looking for RNA like the PCR test, these antigen tests look for pieces of the virus’s proteins. It would be like if I wanted a test to tell me that somebody was a particular individual, and it looked for features like their eyes or their nose. In this case, it is looking for different parts of the virus, in general the spike protein or the nucleocapsid, which are two parts of the virus.

The reason that these antigen tests are going to be a little bit less sensitive to detect the virus molecules is because there’s no step that we call an amplification step. One of the things that makes the PCR test that looks for the virus RNA so powerful is that it can take just one molecule, which the sensor on the machine might not be able to detect readily, but then it amplifies that molecule millions and millions of times so that the sensor can see it. These antigen tests, because they’re so simple and so easy to use and just happen on a piece of paper, they don’t have that amplification step right now. And so they require a larger amount of virus in order to be able to detect it. And that’s why I like to think of these types of tests having their primary advantage to detect people with enough virus that they might be transmitting or transmissible to other people.”

The PCR test provides a simple yes/no answer to the question of whether a patient is infected.
Source: Covid Confusion On PCR Testing: Maybe Most Of Those Positives Are Negatives.

Similar PCR tests for other viruses nearly always offer some measure of the amount of virus. But yes/no isn’t good enough, Mina added: it’s the amount of virus that should dictate the infected patient’s next steps. “It’s really irresponsible, I think, to [ignore this],” Dr. Mina said of how contagious an infected patient may be.

“We’ve been using one type of data for everything,” Mina said, “for [diagnosing patients], for public health, and for policy decision-making.”

The PCR test amplifies genetic matter from the virus in cycles; the fewer cycles required, the greater the amount of virus, or viral load, in the sample. The greater the viral load, the more likely the patient is to be contagious.

The number of amplification cycles needed to find the virus, called the cycle threshold, is never included in the results sent to doctors and coronavirus patients, although if it were, it could give them an idea of how infectious the patients are.

One solution would be to adjust the cycle threshold used now to decide that a patient is infected. Most tests set the limit at 40, a few at 37. This means that you are positive for the coronavirus if the test process required up to 40 cycles, or 37, to detect the virus.

Any test with a cycle threshold above 35 is too sensitive, Juliet Morrison, a virologist at the University of California, Riverside told the New York Times. “I’m shocked that people would think that 40 could represent a positive,” she said.

A more reasonable cutoff would be 30 to 35, she added. Dr. Mina said he would set the figure at 30, or even less.

Another solution, researchers agree, is to make more widespread use of Rapid Diagnostic Tests (RDTs), which are much less sensitive and therefore likely to identify only patients with high levels of virus, who are the ones posing a transmission risk.

Comment:  In other words, when they analyzed the tests that also reported cycle threshold (CT), they found that 85 to 90 percent were above 30. According to Dr. Mina a CT of 37 is 100 times too sensitive (7 cycles too much, 2^7 = 128) and a CT of 40 is 1,000 times too sensitive (10 cycles too much, 2^10 = 1024). Based on their sample of tests that also reported CT, as few as 10 percent of people with positive PCR tests actually have an active COVID-19 infection. Which is a lot less than reported.
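To make the arithmetic concrete, here is a minimal Python sketch (my own illustration, not from the source), assuming each PCR cycle roughly doubles the target genetic material and taking Dr. Mina’s suggested CT 30 as the reasonable baseline:

```python
# Each PCR cycle roughly doubles the target genetic material, so a cutoff
# at CT 40 accepts samples with ~2^(40-30) = 1024x less virus than CT 30 would.

def fold_excess_sensitivity(ct_cutoff: int, baseline: int = 30) -> int:
    """How many times more sensitive a CT cutoff is relative to the baseline cutoff."""
    return 2 ** (ct_cutoff - baseline)

for ct in (35, 37, 40):
    print(f"CT {ct}: {fold_excess_sensitivity(ct):>5,}x more sensitive than CT 30")
# CT 37 -> 128x (~100 times) and CT 40 -> 1,024x (~1,000 times), matching the figures above.
```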

Here is a graph showing how this applies to Canada.

It is evident that increased testing has resulted in more positives, while the positivity rate is unchanged. Doubling the tests has doubled the positives, up from 300 a day to nearly 600 a day presently.  Note these are PCR results. And the discussion above suggests that the number of persons with an active infectious viral load is likely 10% of those reported positive: IOW up from 30 a day to 60 a day.  And in the graph below, the total of actual cases in Canada is likely on the order of 13,000 over the last 7 months, an average of 62 cases a day.
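A small sketch of that adjustment (my own illustration; the 10% infectious share is the assumption discussed above, and the other figures are the approximate Canadian numbers quoted in the text):

```python
# Scale reported PCR positives down to an assumed 10% with an active,
# infectious viral load, per the discussion of cycle thresholds above.

ACTIVE_FRACTION = 0.10   # assumed share of PCR positives that are truly infectious

for reported_per_day in (300, 600):
    print(f"{reported_per_day} reported/day -> ~{reported_per_day * ACTIVE_FRACTION:.0f} active/day")

# Rough average over the roughly seven-month (210-day) period quoted in the text:
total_actual = 13_000
print(f"~{total_actual / 210:.0f} actual cases per day on average")   # ~62
```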

WuFlu Exposes a Fundamental Flaw in US Health System

Dr. Mina goes on to explain what went wrong in US response to WuFlu:

In the U.S., we have a major focus on clinical medicine, and we have undervalued and underfunded the whole concept of public health for a very long time. We saw an example of this, for instance, when we tried to get the state laboratories across the country to be able to perform the PCR tests back in February and March: we very quickly realized that our public health infrastructure in this country just wasn’t up to the task. We had very few labs that were really able to do enough testing to just meet the clinical demands. And so such a reduced focus on public health for so long has led to an ecosystem where our regulatory agencies, this being primarily the FDA, have a mandate to approve clinical medical diagnostic tests. But there’s actually no regulatory pathway that is available or exists — and in many ways, we don’t even have a language for it — for a test whose primary purpose is one of public health and not personal medical health.

That’s really caused a problem. And a lot of times, it’s interesting if you think about the United States, every single test that we get, with the exception maybe of a pregnancy test, has to go through a physician. And so that’s a symptom of a country that has focused, and a society really, that has focused so heavily on the medical industrial complex. And I’m part of that as a physician. But I also am part of the public health complex as an epidemiologist. And I see that sometimes these are at odds with each other, medicine and public health. And this is an example where because all of our regulatory infrastructure is so focused on medical devices… If you’re a public health person, you can actually have a huge amount of leeway in how your tests are working and still be able to get epidemics under control. And so there’s a real tension here between the regulations that would be required for these types of tests versus a medical diagnostic test.

Footnote:  I don’t think the Chinese leaders were focusing on the systemic weakness Dr. Mina mentions.  But you do have to bow to the inscrutable cleverness of the Chinese Communists in releasing WuFlu as a means to sow internal turmoil within democratic capitalist societies.  On one side are profit-seeking Big Pharma, aided and abetted by Big Media using fear to attract audiences for advertising revenues.  The panicked public demands protection, which clueless government provides by shutting down the service and manufacturing industries, as well as throwing money around and taking on enormous debt.  The world just became China’s oyster.

Background from Previous Post: Covid Burnout in Canada August 28

The map shows that in Canada 9108 deaths have been attributed to Covid19, meaning people who died having tested positive for SARS CV2 virus.  This number accumulated over a period of 210 days starting January 31. The daily death rate reached a peak of 177 on May 6, 2020, and is down to 6 as of yesterday.  More details on this below, but first the summary picture. (Note: 2019 is the latest demographic report)

              Canada Pop     Ann Deaths   Daily Deaths   Risk per Person
2019          37,589,262     330,786      906            0.8800%
Covid 2020    37,589,262     9,108        43             0.0242%

Over the epidemic months, the average Covid daily death rate amounted to 5% of the All Causes death rate. During this time a Canadian had an average risk of 1 in 5000 of dying with SARS CV2 versus a 1 in 114 chance of dying regardless of that infection. As shown later below the risk varied greatly with age, much lower for younger, healthier people.
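For readers who want to reproduce the table’s ratios, a short Python sketch (my own, using only the numbers shown above):

```python
# Recompute the risk figures in the table above from the raw counts.

population = 37_589_262
annual_deaths_all_causes = 330_786   # 2019, all causes
covid_deaths = 9_108                 # attributed over the 210 epidemic days

daily_all_causes = annual_deaths_all_causes / 365    # ~906 per day
daily_covid = covid_deaths / 210                     # ~43 per day

print(f"Risk per person, all causes: {annual_deaths_all_causes / population:.4%}")  # 0.8800%
print(f"Risk per person, Covid:      {covid_deaths / population:.4%}")              # 0.0242%
print(f"Covid share of daily deaths: {daily_covid / daily_all_causes:.0%}")         # ~5%
```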

Background Updated from Previous Post

In reporting on Covid19 pandemic, governments have provided information intended to frighten the public into compliance with orders constraining freedom of movement and activity. For example, the above map of the Canadian experience is all cumulative, and the curve will continue upward as long as cases can be found and deaths attributed.  As shown below, we can work around this myopia by calculating the daily differentials, and then averaging newly reported cases and deaths by seven days to smooth out lumps in the data processing by institutions.

A second major deficiency is lack of reporting of recoveries, including people infected and not requiring hospitalization or, in many cases, without professional diagnosis or treatment. The only recoveries presently to be found are limited statistics on patients released from hospital. The only way to get at the scale of recoveries is to subtract deaths from cases, considering survivors to be in recovery or cured. Comparing such numbers involves the delay between infection, symptoms and death. Herein lies another issue of terminology: a positive test for the SARS CV2 virus is reported as a case of the disease COVID19. In fact, an unknown number of people have been infected without symptoms, and many with very mild discomfort.

August 7 in the UK it was reported (here) that around 10% of coronavirus deaths recorded in England – almost 4,200 – could be wiped from official records due to an error in counting.  Last month, Health Secretary Matt Hancock ordered a review into the way the daily death count was calculated in England citing a possible ‘statistical flaw’.  Academics found that Public Health England’s statistics included everyone who had died after testing positive – even if the death occurred naturally or in a freak accident, and after the person had recovered from the virus.  Numbers will now be reconfigured, counting deaths if a person died within 28 days of testing positive much like Scotland and Northern Ireland…

Professor Heneghan, director of the Centre for Evidence-Based Medicine at Oxford University, who first noticed the error, told the Sun:

‘It is a sensible decision. There is no point attributing deaths to Covid-19 28 days after infection…

For this discussion let’s assume that anyone reported as dying from COVID19 tested positive for the virus at some point prior. From the reasoning above let us assume that 28 days after testing positive for the virus, survivors can be considered recoveries.

Recoveries are calculated as cases minus deaths with a lag of 28 days. Daily cases and deaths are averages of the seven days ending on the stated date. Recoveries are # of cases from 28 days earlier minus # of daily deaths on the stated date. Since both testing and reports of Covid deaths were sketchy in the beginning, this graph begins with daily deaths as of April 24, 2020 compared to cases reported on March 27, 2020.
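A minimal sketch of that calculation in Python (my own illustration of the method described above; the input series are placeholders, not real data):

```python
# Recoveries estimated as 7-day-average cases from 28 days earlier
# minus 7-day-average deaths on the stated date.

def seven_day_average(series, end_index):
    """Average of the seven daily values ending at end_index (inclusive)."""
    window = series[end_index - 6 : end_index + 1]
    return sum(window) / len(window)

def estimated_recoveries(daily_cases, daily_deaths, day_index):
    """Cases lagged 28 days minus deaths, both smoothed over 7 days."""
    lagged_cases = seven_day_average(daily_cases, day_index - 28)
    deaths_now = seven_day_average(daily_deaths, day_index)
    return lagged_cases - deaths_now
```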

The line shows the Positivity metric for Canada starting at nearly 8% for new cases April 24, 2020. That is, for the 7 day period ending April 24, there was a daily average of 21,772 tests and 1,715 new cases reported. Since then the rate of new cases has dropped, holding steady at ~1% positivity since mid-June. Yesterday, the daily average number of tests was 45,897 with 427 new cases. So despite more than doubling the testing, the positivity rate is not climbing.  Another view of the data is shown below.

The scale of testing has increased and now averages over 45,000 a day, while positive tests (cases) are hovering at 1% positivity.  The shape of the recovery curve resembles the case curve lagged by 28 days, since death rates are a small portion of cases.  The recovery rate has grown from 83% to 99%, holding steady over the last 2 weeks, so that recoveries now exceed new positives. This approximation surely understates the number of those infected with SARS CV2 who are healthy afterwards, since antibody studies show infection rates multiples higher than confirmed positive tests (8 times higher in Canada).  In absolute terms, cases are now down to 427 a day and deaths 6 a day, while estimates of recoveries are 437 a day.
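The positivity figures quoted above are easy to verify (my own check, using the daily averages given in the text):

```python
# Positivity = new cases / tests, using the 7-day averages quoted above.

def positivity(new_cases: float, tests: float) -> float:
    return new_cases / tests

print(f"Week ending Apr 24: {positivity(1_715, 21_772):.1%}")   # ~7.9%
print(f"Latest week:        {positivity(427, 45_897):.1%}")     # ~0.9%
```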

The key numbers: 

99% of those tested are not infected with SARS CV2. 

99% of those who are infected recover without dying.

Summary of Canada Covid Epidemic

It took a lot of work, but I was able to produce something akin to the Dutch advice to their citizens.

The media and governmental reports focus on total accumulated numbers which are big enough to scare people to do as they are told.  In the absence of contextual comparisons, citizens have difficulty answering the main (perhaps only) question on their minds:  What are my chances of catching Covid19 and dying from it?

A previous post reported that the Netherlands parliament was provided with the type of guidance everyone wants to see.

For Canadians, the most similar analysis is this one from the Daily Epidemiology Update:

The table presents only those cases with a full clinical documentation, which included some 2194 deaths compared to the 5842 total reported.  The numbers show that under 60 years old, few adults and almost no children have anything to fear.

Update May 20, 2020

It is really quite difficult to find cases and deaths broken down by age groups.  For Canadian national statistics, I resorted to a report from Ontario to get the age distributions, since that province provides 69% of the cases outside of Quebec and 87% of the deaths.  Applying those proportions across Canada results in this table. For Canada as a whole nation:

Age      Risk of Test +   Risk of Death   Population per 1 CV death
<20      0.05%            None            NA
20-39    0.20%            0.000%          431,817
40-59    0.25%            0.002%          42,273
60-79    0.20%            0.020%          4,984
80+      0.76%            0.251%          398

In the worst case, if you are a Canadian aged more than 80 years, you have a 1 in 400 chance of dying from Covid19.  If you are 60 to 80 years old, your odds are 1 in 5000.  Younger than that, your risk is only slightly higher than that of winning the lottery (or in this case, losing it).
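The “1 in N” odds come straight from the table’s percentages; a one-line conversion (my own sketch):

```python
# Convert a percentage risk into "1 in N" odds, using the Canada table above.

def one_in(risk_percent: float) -> int:
    return round(100 / risk_percent)

print(f"Ages 80+:   1 in {one_in(0.251)}")   # ~1 in 400
print(f"Ages 60-79: 1 in {one_in(0.020)}")   # 1 in 5000
```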

As noted above, Quebec provides the bulk of cases and deaths in Canada, and also reports age distribution more precisely.  The numbers in the table below show risks for Quebecers.

Age        Risk of Test +   Risk of Death   Population per 1 CV death
0-9        0.13%            0               NA
10-19      0.21%            0               NA
20-29      0.50%            0.000%          289,647
30-39      0.51%            0.001%          152,009
40-49      0.63%            0.001%          73,342
50-59      0.53%            0.005%          21,087
60-69      0.37%            0.021%          4,778
70-79      0.52%            0.094%          1,069
80-89      1.78%            0.469%          213
90+        5.19%            1.608%          62

While some of the risk factors are higher in the viral hotspot of Quebec, it is still the case that under 80 years of age, your chances of dying from Covid 19 are better than 1 in 1000, and much better the younger you are.

Sturgis Bikers Not Superspreaders

Jennifer Beam Dowd writes at Slate The Sturgis Biker Rally Did Not Cause 266,796 Cases of COVID-19.  Excerpts in italics with my bolds.

The recent mass gathering in South Dakota for the annual Sturgis Motorcycle Rally seemed like the perfect recipe for what epidemiologists call a “superspreading” event. Beginning Aug. 7, an estimated 460,000 attendees from all over the country descended on the small town of Sturgis for a 10-day event filled with indoor and outdoor events such as concerts and drag racing.

Now a new working paper by economist Dhaval Dave and colleagues is making headlines with their estimate that the Sturgis rally led to a shocking 266,796 new cases in the U.S. over a four-week period, which would account for a staggering 19 percent of newly confirmed cases in the U.S. in that time. They estimate the economic cost of these cases at $12.2 billion, based on previous estimates of the statistical cost of treating a COVID-19 patient.

Modeling infection transmission dynamics is hard, as we have seen by the less than stellar performance of many predictive COVID-19 models thus far. (Remember back in April, when the IHME model from the University of Washington predicted zero U.S. deaths in July?) Pandemic spread is difficult both to predict and to explain after the fact—like trying to explain the direction and intensity of current wildfires in the West. While some underlying factors do predict spread, there is a high degree of randomness, and small disturbances (like winds) can cause huge variation across time and space. Many outcomes that social scientists typically study, like income, are more stable and not as susceptible to these “butterfly effects” that threaten the validity of certain research designs.

While this approach, a difference-in-differences design comparing case trends in counties that sent many attendees to Sturgis against counties that sent few or none, may sound sensible, it relies on strong assumptions that rarely hold in the real world. For one thing, there are many other differences between counties full of bike rally fans versus those with none, and therein lies the challenge of creating a good “counterfactual” for the implied experiment—how to compare trends in counties that are different on many geographic, social, and economic dimensions? The “parallel trends” assumption assumes that every county was on a similar trajectory and the only difference was the number of attendees sent to the Sturgis rally. When this “parallel trends” assumption is violated, the resulting estimates are not just off by a little—they can be completely wrong.
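For readers unfamiliar with the design, here is a toy difference-in-differences calculation (my own illustration with invented numbers, not figures from the paper), showing how completely the estimate depends on the parallel-trends assumption:

```python
# Difference-in-differences: the "effect" is the extra change in the treated
# group beyond the change seen in the control group over the same period.

def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical weekly cases per 100k residents in high-inflow vs. comparison counties:
effect = diff_in_diff(treat_pre=20, treat_post=60, control_pre=18, control_post=30)
print(effect)   # 28 "extra" cases attributed to the rally -- valid only if the
                # two groups really would have moved in parallel absent the event.
```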

This type of modeling is risky, and the burden of proof for the believability of the assumptions is very high.

If thinking through the required transmission dynamics doesn’t raise your alarm bells, consider this: The paper’s results show that the significant increase in transmission was only evident after Aug. 26. That makes sense—it would be consistent with a lag time for infections from the beginning of the rally. Nonetheless, the authors state that their estimate of the total number of cases, 266,796, represents “19 percent of the 1.4 million cases of COVID-19 in the United States between August 2nd 2020 and September 2nd.” (Italics mine.) In reality, these extra cases must have occurred in the second half of the month, meaning these estimates would account for a staggering 45 percent of U.S. cases over those two weeks. This simply doesn’t seem plausible.

The 266,796 number also overstates the precision of the estimates in the paper even if the model is taken at face value. The confidence intervals for the “high inflow” counties seem to include zero (meaning the authors can’t say with statistical confidence that there was any difference in infections across counties due to the rally). No standard errors (measures of the variability around the estimate) are provided for the main regression results, and many of the p-values for key results are not statistically significant at conventional levels. So even if one believes the design and assumptions, the results are very “noisy” and subject to caveats that don’t merit the broadcasting of the highly specific 266,796 figure with confidence, though I imagine that “somewhere between zero and 450,000 infections” would not have been as headline-grabbing.

The paper also estimates the rise in cases in Meade County, South Dakota, the site of the rally, and reports an increase of between 177 and 195 cases compared with a “synthetic control” of similar counties, an approach similar in spirit to the difference-in-difference model. This represents a 100 to 200 percent increase in cases, which also appears to be a serious overestimate. Looking at the raw case data for Meade County, cumulative cases from Aug. 3 to Sept. 2 increased from 45 to 74, an increase of only 29 cases (though a 64 percent increase). With a cumulative case count of only 74 in Meade County by Sept. 2, an estimated increase of 103 more than the total observed over the whole pandemic suggests serious problems with the model.
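The raw Meade County arithmetic is simple enough to check directly (my own check of the figures quoted above):

```python
# Raw cumulative case counts for Meade County quoted above.
cases_aug_3 = 45
cases_sep_2 = 74

increase = cases_sep_2 - cases_aug_3
print(increase)                                  # 29 additional cases
print(f"{increase / cases_aug_3:.0%} increase")  # ~64%
```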

Again, the authors employ a method that implicitly compares what happened in Meade County to similar hypothetical “twin” counties. Counties from within South Dakota and bordering states were excluded since they may also have been directly affected by the rally. Counties that shared similar urbanicity rates, population density, and pre-rally COVID-19 cases per capita were considered good candidates for this counterfactual group. Finding valid comparisons is key. Upon inspection, one of the counties weighted heavily as a “control” was in Hawaii—I think we can agree that islands during a pandemic are not likely a good control group for what is happening in the lower 48.

None of this means that the rally was harmless. Common sense would tell us that such a large event with close contact was risky and did increase transmission. The rise in Meade County was real and noticeable, albeit on the scale of 29 cases. Given the huge inflow to this specific location along with increased testing for the event, a bump was not surprising.

Contact tracing reports have identified cases and deaths linked to the event, but in the range of hundreds.

More broadly, while it’s important for us to understand factors driving COVID-19 transmission, the methodological challenges to identifying these effects at the aggregate level are difficult to overcome. Improved contact tracing and surveys at the individual level are the best way to gain insights into transmission dynamics. (At Dear Pandemic, a COVID-19 science communication effort I run with colleagues, we unfortunately spend much of our time explaining and correcting such misleading statistics.) The authors of this study have used the same study design to estimate the effects of other mass gatherings including the BLM protests and Trump’s June Tulsa, Oklahoma, rally. Each paper has given some part of the political spectrum something they might want to hear but has done very little to illuminate the actual risks of COVID-19 transmission at these events.

Exaggerated headlines and cherry-picking of results for “I told you so” media moments can dangerously undermine the long-term integrity of the science—something we can little afford right now.