Why the Left Coast is Still Burning

Update September 13, 2020:  This reprint of a post from two years ago shows that nothing has changed, except for the worse.

It is often said that truth is the first casualty in the fog of war. That is especially true of the war against fossil fuels and smoke from wildfires. The forests are burning in California, Oregon and Washington, all of them steeped in liberal, progressive and post-modern ideology. There are human reasons that fires are out of control in those places, and CO2 emissions are not among them. As we shall see, Zinke is right and Brown is wrong. Here are some truths the media are not telling you in their drive to blame global warming/climate change. Text below is excerpted from sources linked at the end.

1. The World and the US are not burning.

The geographic extent of this summer’s forest fires won’t come close to the aggregate record for the U.S. Far from it. Yes, there are some terrible fires now burning in California, Oregon, and elsewhere, and the total burnt area this summer in the U.S. is likely to exceed the 2017 total. But as the chart above shows, the burnt area in 2017 was less than 20% of the record set way back in 1930. The same is true of the global burnt area, which has declined over many decades.

In fact, this 2006 paper reported the following:

“Analysis of charcoal records in sediments [31] and isotope-ratio records in ice cores [32] suggest that global biomass burning during the past century has been lower than at any time in the past 2000 years. Although the magnitude of the actual differences between pre-industrial and current biomass burning rates may not be as pronounced as suggested by those studies [33], modelling approaches agree with a general decrease of global fire activity at least in past centuries [34]. In spite of this, fire is often quoted as an increasing issue around the globe [11,26–29].”

People have a tendency to exaggerate the significance of current events. Perhaps the youthful can be forgiven for thinking hot summers are a new phenomenon. Incredibly, more “seasoned” folks are often subject to the same fallacies. The fires in California have so impressed climate alarmists that many of them truly believe global warming is the cause of forest fires in recent years, including the confused bureaucrats at Cal Fire, the state’s firefighting agency. Of course, the fires have given fresh fuel to self-interested climate activists and pressure groups, an opportunity for greater exaggeration of an ongoing scare story.

This year, however, and not for the first time, a high-pressure system has been parked over the West, bringing southern winds up the coast along with warmer waters from the south, keeping things warm and dry inland. It’s just weather, though a few arsonists and careless individuals always seem to contribute to the conflagrations. Beyond all that, the impact of a warmer climate on the tendency for biomass to burn is considered ambiguous for realistic climate scenarios.

2. Public forests are no longer managed due to litigation.

According to a 2014 white paper titled “Twenty Years of Forest Service Land Management Litigation,” by Amanda M.A. Miner, Robert W. Malmsheimer, and Denise M. Keele: “This study provides a comprehensive analysis of USDA Forest Service litigation from 1989 to 2008. Using a census and improved analyses, we document the final outcome of the 1125 land management cases filed in federal court. The Forest Service won 53.8% of these cases, lost 23.3%, and settled 22.9%. It won 64.0% of the 669 cases decided by a judge based on cases’ merits. The agency was more likely to lose and settle cases during the last six years; the number of cases initiated during this time varied greatly. The Pacific Northwest region along with the Ninth Circuit Court of Appeals had the most frequent occurrence of cases. Litigants generally challenged vegetative management (e.g. logging) projects, most often by alleging violations of the National Environmental Policy Act and the National Forest Management Act. The results document the continued influence of the legal system on national forest management and describe the complexity of this litigation.”

There is abundant evidence that when a forest project proposes vegetative management as a pretext for a logging operation, salvage or otherwise, litigation is likely to ensue; in addition to NEPA, the USFS uses the Property Clause to address any potential removal of ‘forest products’. Nevertheless, the USFS currently spends more than 50% of its total budget, about $1.8 billion annually, on wildfire suppression alone, while there is scant spending on wildfire prevention.

3. Mega fires are the unnatural result of fire suppression.

And what of the “mega-fires” burning in the West, like the huge Mendocino Complex Fire and last year’s Thomas Fire? Unfortunately, many decades of fire suppression measures — prohibitions on logging, grazing, and controlled burns — have left the forests with too much dead wood and debris, especially on public lands. From the last link:

“Oregon, like much of the western U.S., was ravaged by massive wildfires in the 1930s during the Dust Bowl drought. Megafires were largely contained due to logging and policies to actively manage forests, but there’s been an increasing trend since the 1980s of larger fires.

Active management of the forests and logging kept fires at bay for decades, but that largely ended in the 1980s over concerns about logging too many old growth trees and harming the northern spotted owl. Lawsuits from environmental groups hamstrung logging, and government planners cut back on thinning trees and road maintenance.

[Bob] Zybach [a forester] said Native Americans used controlled burns to manage the landscape in Oregon, Washington and northern California for thousands of years. Tribes would burn up to 1 million acres a year on the west coast to prime the land for hunting and grazing, Zybach’s research has shown.

‘The Indians had lots of big fires, but they were controlled,’ Zybach said. ‘It’s the lack of Indian burning, the lack of grazing’ and other active management techniques that caused fires to become more destructive in the 19th and early 20th centuries before logging operations and forest management techniques got fires under control in the mid-20th Century.”

4. Bad federal forest administration started in the 1990s.

Bob Zybach feels like a broken record. Decades ago he warned government officials allowing Oregon’s forests to grow unchecked by proper management would result in catastrophic wildfires.

While some want to blame global warming for the uptick in catastrophic wildfires, Zybach said a change in forest management policies is the main reason Americans are seeing a return to more intense fires, particularly in the Pacific Northwest and California where millions of acres of protected forests stand.

“We knew exactly what would happen if we just walked away,” Zybach, an experienced forester with a PhD in environmental science, told The Daily Caller News Foundation.

Zybach spent two decades as a reforestation contractor before heading to graduate school in the 1990s. Then in 1994 the Clinton administration introduced its plan to protect old growth trees and spotted owls by strictly limiting logging. Less logging also meant government foresters weren’t doing as much active management of forests: thinning, prescribed burns and other activities to reduce wildfire risk.

Zybach told Evergreen magazine that year the Clinton administration’s plan for “naturally functioning ecosystems” free of human interference ignored history and would fuel “wildfires reminiscent of the Tillamook burn, the 1910 fires and the Yellowstone fire.”

Between 1952 and 1987, western Oregon saw only one major fire above 10,000 acres. The region’s relatively fire-free streak ended with the Silver Complex Fire of 1987 that burned more than 100,000 acres in the Kalmiopsis Wilderness area, torching rare plants and trees the federal government set aside to protect from human activities. The area has burned several more times since the 1980s.

“Mostly fuels were removed through logging, active management — which they stopped — and grazing,” Zybach said in an interview. “You take away logging, grazing and maintenance, and you get firebombs.”

Now, Oregonians are dealing with 13 wildfires engulfing 185,000 acres. California is battling nine fires scorching more than 577,000 acres, mostly in the northern forested parts of the state managed by federal agencies.

The Mendocino Complex Fire quickly spread to become the largest wildfire in California since the 1930s, engulfing more than 283,000 acres. The previous wildfire record was set by 2017’s Thomas Fire that scorched 281,893 acres in Southern California.

While bad fires still happen on state and private lands, most of the massive blazes happen on or around lands managed by the U.S. Forest Service and other federal agencies, Zybach said. Poor management has turned western forests into “slow-motion time bombs,” he said.

A feller buncher removing small trees that act as fuel ladders and transmit fire into the forest canopy.

5. True environmentalism is not nature love, but nature management.

While wildfires do happen across the country, poor management by western states has served to turn entire regions into tinderboxes. California and Oregon have chosen to let nature play out its course, even close to civilization.

Many in heartland America and along the Eastern Seaboard see logging and firelines when they travel to rural areas. This is part and parcel of life outside the city, where everyone knows that a few minor eyesores make their houses and communities safer from the primal fury of wildfires.

In other words, leaving the forests to “nature” and protecting the endangered Spotted Owl created denser forests (300-400 trees per acre rather than 50-80) with more fuel from the 129 million diseased and dead trees, producing more intense and destructive fires. Yet California spends more than ten times as much money on electric vehicle subsidies ($335 million) as on reducing fuel in a mere 60,000 of 33 million acres of forests ($30 million).

Rancher Ross Frank worries that funding to fight fires in Western communities like Chumstick, Wash., has crowded out important land management work. Rowan Moore Gerety/Northwest Public Radio

Once again, global warming “science” is a camouflage for political ideology and gratifying myths about nature and human interactions with it. On the one hand, progressives seek “crises” that justify more government regulation and intrusion that limit citizen autonomy and increase government power. On the other, well-nourished moderns protected by technology from nature’s cruel indifference to all life can afford to indulge myths that give them psychic gratification at little cost to their daily lives.

As usual, bad cultural ideas lie behind these policies and attitudes. Most important is the modern fantasy that before civilization human beings lived in harmony and balance with nature. The rise of cities and agriculture began the rupture with the environment, “disenchanting” nature and reducing it to mere resources to be exploited for profit. In the early 19th century, the growth of science that led to the industrial revolution inspired the Romantic movement to contrast industrialism’s “Satanic mills” and the “shades of the prison-house,” with a superior natural world and its “beauteous forms.” In an increasingly secular age, nature now became the Garden of Eden, and technology and science the signs of the fall that has banished us from the paradise enjoyed by humanity before civilization.

The untouched nature glorified by romantic environmentalism, then, is not our home. Ever since the cave men, humans have altered nature to make it more conducive to human survival and flourishing. After the retreat of the ice sheets changed the environment and animal species on which people had depended for food, humans in at least four different regions of the world independently invented agriculture to better manage the food supply. Nor did the American Indians, for example, live “lightly on the land” in a pristine “forest primeval.” They used fire to shape their environment for their own benefit. They burned forests to clear land for cultivation, to create pathways to control the migration of bison and other game, and to promote the growth of trees more useful for them.

Remaining trees and vegetation on the forest floor are more vigorous after removal of small trees for fuels reduction.

And today we continue to improve cultivation techniques and foods to make them more reliable, abundant, and nutritious, not to mention more various and safe. We have been so successful at managing our food supply that today one person out of ten provides food that used to require nine out of ten, obesity has become the plague of poverty, and famines result from political dysfunction rather than nature.

That’s why untouched nature, the wild forests filled with predators, has not been our home. The cultivated nature improved by our creative minds has. True environmentalism is not nature love, but nature management: applying skill and technique to make nature more useful for humans, at the same time conserving resources so that those who come after us will be able to survive. Managing resources and exploiting them for our benefit without destroying them is how we should approach the natural world. We should not squander resources or degrade them, not because of nature, but because when we do so, we are endangering the well-being of ourselves and future generations.

Conclusion

The annual burnt area from wildfires has declined over the past ninety years both in the U.S. and globally. Even this year’s wildfires are unlikely to come close to the average burn extent of the 1930s. The large wildfires this year are due to a combination of decades of poor forest management along with a weather pattern that has trapped warm, dry air over the West. The contention that global warming has played a causal role in the pattern is balderdash, but apparently that explanation seems plausible to the uninformed, and it is typical of the propaganda put forward by climate change interests.

Sources: 

https://www.frontpagemag.com/fpm/271044/junk-science-and-leftist-folklore-have-set-bruce-thornton

https://www.4baseball.com/westernjournal.com/after-libs-blame-west-coast-fires-on-global-warming-forester-speaks-out/

https://sacredcowchips.net/tag/bob-zybach/

https://www.horsetalk.co.nz/2017/10/13/ecological-imbalance-wildfires-us-rangelands/

http://dailycaller.com/2018/08/08/mismanagement-forests-time-bombs/

Footnote:  So how do you want your forest fires, some small ones now or mega fires later?

New, Better and Faster CV Tests

Kevin Pham reports on a breakthrough in coronavirus testing. Excerpts in italics with my bolds.

Another new test for COVID-19 was recently authorized — and this one could be a game-changer.

The Abbott Diagnostics BinaxNOW antigen test is a new point-of-care test that reportedly costs only $5 to administer, delivers results in as little as 15 minutes, and requires no laboratory equipment to perform. That means it can be used in clinics far from commercial labs or without relying on a nearby hospital lab.

That last factor is key. There are other quick COVID-19 tests on the market, but they have all required lab equipment that can be expensive to maintain and operate, and costs can be prohibitive in places that need tests most.

This kind of test is reminiscent of the rapid flu tests that are ubiquitous in clinics. They’ll give providers tremendous flexibility in testing for the disease not just in clinics but, with trained and licensed medical professionals, in schools, workplaces, camps, and any number of other places.

So what’s new about this test? Most of the current tests detect viral RNA, the genetic material of SARS-CoV-2. This is a very accurate way of detecting the virus, but it requires lab equipment to break apart the virus and amplify the amount of genetic material to high enough levels for detection.

The BinaxNOW test detects antigens — proteins unique to the virus that are usually detectable whenever there is an active infection.

Abbott says it intends to produce 50 million tests per month starting in October. That’s far more than the number tested in July, when we were breaking new testing records on a daily basis with approximately 23 million tests recorded.

There’s a more important reason to be encouraged by this test becoming available.  The viral load is not amplified by the test, so a positive really is a person needing isolation and treatment.  As explained in a previous post below, the PCR tests used up to now clutter up the record by showing as positive people with viral loads too low to be sick or to infect others.

Background from Previous Post The Truth About CV Tests

The peoples’ instincts are right, though they have been kept in the dark about this “pandemic” that isn’t.  Responsible citizens are starting to act out their outrage from being victimized by a medical-industrial complex (to update Eisenhower’s warning decades ago).  The truth is, governments are not justified to take away inalienable rights to life, liberty and the pursuit of happiness.  There are several layers of disinformation involved in scaring the public.  This post digs into the CV tests, and why the results don’t mean what the media and officials claim.

For months now, I have been updating the progress in Canada of the CV outbreak.  A previous post later on goes into the details of extracting data on tests, persons testing positive (termed “cases” without regard for illness symptoms) and deaths after testing positive.  Currently, the contagion looks like this.

The graph shows that deaths are fewer than 5 a day, compared to a daily death rate of 906 in Canada from all causes.  Also significant is the positivity ratio:  the % of persons testing positive out of all persons tested each day.  That % has been fairly steady for months now:  1% positive means 99% of those tested are not infected. And this is despite more than doubling the rate of testing.

But what does testing positive actually mean?  Herein lies more truth that has been hidden from the public for the sake of an agenda to control free movement and activity.  Background context comes from  Could Rapid Coronavirus Testing Help Life Return To Normal?, an interview at On Point with Dr. Michael Mina.  Excerpts in italics with my bolds. H/T Kip Hansen

A sign displays a new rapid coronavirus test on the new Abbott ID Now machine at a ProHEALTH center in Brooklyn on August 27, 2020 in New York City. (Spencer Platt/Getty Images)

Dr. Michael Mina:

COVID tests can actually be put onto a piece of paper, very much like a pregnancy test. In fact, it’s almost exactly like a pregnancy test. But instead of looking for the hormones that tell if somebody is pregnant, it looks for the virus proteins that are part of SARS-CoV-2. And it would be very simple: You’d either swab the front of your nose or you’d take some saliva from under your tongue, for example, and put it onto one of these paper strips, essentially. And if you see a line, it means you’re positive. And if you see no line, it means you are negative, at least for having a high viral load that could be transmissible to other people.

An antigen is one of the proteins in the virus. Unlike the PCR test, which is what most people who have received a test to date have generally received, antigen tests do not look for the genome of the virus, its RNA. You could think of RNA the same way that humans have DNA; this virus has RNA. But instead of looking for RNA like the PCR test, these antigen tests look for pieces of the virus’s protein. It would be like if I wanted a test to tell me that somebody was a particular individual: it would actually look for features like their eyes or their nose. In this case, it is looking for different parts of the virus, in general the spike protein or the nucleocapsid, two parts of the virus.

The reason that these antigen tests are going to be a little bit less sensitive to detect the virus molecules is because there’s no step that we call an amplification step. One of the things that makes the PCR test that looks for the virus RNA so powerful is that it can take just one molecule, which the sensor on the machine might not be able to detect readily, but then it amplifies that molecule millions and millions of times so that the sensor can see it. These antigen tests, because they’re so simple and so easy to use and just happen on a piece of paper, they don’t have that amplification step right now. And so they require a larger amount of virus in order to be able to detect it. And that’s why I like to think of these types of tests having their primary advantage to detect people with enough virus that they might be transmitting or transmissible to other people.”

The PCR test provides a simple yes/no answer to the question of whether a patient is infected.
Source: Covid Confusion On PCR Testing: Maybe Most Of Those Positives Are Negatives.

Similar PCR tests for other viruses nearly always offer some measure of the amount of virus. But yes/no isn’t good enough, Mina added; it’s the amount of virus that should dictate the infected patient’s next steps. “It’s really irresponsible, I think, to [ignore this],” Dr. Mina said of disregarding how contagious an infected patient may be.

“We’ve been using one type of data for everything,” Mina said, “for [diagnosing patients], for public health, and for policy decision-making.”

The PCR test amplifies genetic matter from the virus in cycles; the fewer cycles required, the greater the amount of virus, or viral load, in the sample. The greater the viral load, the more likely the patient is to be contagious.

This number of amplification cycles needed to find the virus, called the cycle threshold, is never included in the results sent to doctors and coronavirus patients, although if it were, it could give them an idea of how infectious the patients are.

One solution would be to adjust the cycle threshold used now to decide that a patient is infected. Most tests set the limit at 40, a few at 37. This means that you are positive for the coronavirus if the test process required up to 40 cycles, or 37, to detect the virus.

Any test with a cycle threshold above 35 is too sensitive, Juliet Morrison, a virologist at the University of California, Riverside, told the New York Times. “I’m shocked that people would think that 40 could represent a positive,” she said.

A more reasonable cutoff would be 30 to 35, she added. Dr. Mina said he would set the figure at 30, or even less.

Another solution, researchers agree, is even more widespread use of Rapid Diagnostic Tests (RDTs), which are much less sensitive and more likely to identify only patients with high levels of virus who pose a transmission risk.

Comment:  In other words, when they analyzed the tests that also reported cycle threshold (CT), they found that 85 to 90 percent were above 30. According to Dr. Mina a CT of 37 is 100 times too sensitive (7 cycles too much, 2^7 = 128) and a CT of 40 is 1,000 times too sensitive (10 cycles too much, 2^10 = 1024). Based on their sample of tests that also reported CT, as few as 10 percent of people with positive PCR tests actually have an active COVID-19 infection. Which is a lot less than reported.
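The fold-sensitivity arithmetic in this comment is just powers of two, since each PCR cycle roughly doubles the genetic material. A minimal sketch in Python (the cutoff of 30 follows Dr. Mina's suggested threshold; the function name is mine):

```python
# Each PCR amplification cycle roughly doubles the viral genetic material,
# so a cycle threshold k cycles beyond a chosen cutoff is ~2**k times more
# sensitive than that cutoff.
def fold_oversensitivity(cycle_threshold: int, cutoff: int = 30) -> int:
    """How many times more sensitive a CT limit is than the cutoff."""
    extra_cycles = cycle_threshold - cutoff
    return 2 ** extra_cycles if extra_cycles > 0 else 1

print(fold_oversensitivity(37))  # 2**7 = 128, roughly "100 times too sensitive"
print(fold_oversensitivity(40))  # 2**10 = 1024, roughly "1,000 times too sensitive"
```

This reproduces the 128x and 1024x figures quoted above for thresholds of 37 and 40.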

Here is a graph showing how this applies to Canada.

It is evident that increased testing has resulted in more positives, while the positivity rate is unchanged. Doubling the tests has doubled the positives, up from 300 a day to nearly 600 a day presently.  Note these are PCR results. The discussion above suggests that the number of persons with an active infectious viral load is likely 10% of those reported positive: IOW up from 30 a day to 60 a day.  And in the graph below, the actual case total in Canada is likely on the order of 13,000 over the last 7 months, an average of 62 cases a day.

WuFlu Exposes a Fundamental Flaw in US Health System

Dr. Mina goes on to explain what went wrong in US response to WuFlu:

In the U.S., we have a major focus on clinical medicine, and we have undervalued and underfunded the whole concept of public health for a very long time. We saw an example of this, for example, when we tried to get the state laboratories across the country to be able to perform the PCR tests back in February and March: we very quickly realized that our public health infrastructure in this country just wasn’t up to the task. We had very few labs that were really able to do enough testing to just meet the clinical demands. And so such a reduced focus on public health for so long has led to an ecosystem where our regulatory agencies, this being primarily the FDA, have a mandate to approve clinical medical diagnostic tests. But there’s actually no regulatory pathway that is available or exists — and in many ways, we don’t even have a language for it — for a test whose primary purpose is one of public health and not personal medical health.

That’s really caused a problem. And a lot of times, it’s interesting if you think about the United States, every single test that we get, with the exception maybe of a pregnancy test, has to go through a physician. And so that’s a symptom of a country that has focused, and a society really, that has focused so heavily on the medical industrial complex. And I’m part of that as a physician. But I also am part of the public health complex as an epidemiologist. And I see that sometimes these are at odds with each other, medicine and public health. And this is an example where because all of our regulatory infrastructure is so focused on medical devices… If you’re a public health person, you can actually have a huge amount of leeway in how your tests are working and still be able to get epidemics under control. And so there’s a real tension here between the regulations that would be required for these types of tests versus a medical diagnostic test.

Footnote:  I don’t think the Chinese leaders were focusing on the systemic weakness Dr. Mina mentions.  But you do have to bow to the inscrutable cleverness of the Chinese Communists releasing WuFlu as a means to set internal turmoil within democratic capitalist societies.  On one side are profit-seeking Big Pharma, aided and abetted by Big Media using fear to attract audiences for advertising revenues.  The panicked public demands protection which clueless government provides by shutting down the service and manufacturing industries, as well as throwing money around and taking on enormous debt.  The world just became China’s oyster.

Background from Previous Post: Covid Burnout in Canada August 28

The map shows that in Canada 9108 deaths have been attributed to Covid19, meaning people who died having tested positive for SARS CV2 virus.  This number accumulated over a period of 210 days starting January 31. The daily death rate reached a peak of 177 on May 6, 2020, and is down to 6 as of yesterday.  More details on this below, but first the summary picture. (Note: 2019 is the latest demographic report)

Canada        Population    Annual Deaths   Daily Deaths   Risk per Person
2019          37,589,262    330,786         906            0.8800%
Covid 2020    37,589,262    9,108           43             0.0242%

Over the epidemic months, the average Covid daily death rate amounted to 5% of the All Causes death rate. During this time a Canadian had an average risk of 1 in 5000 of dying with SARS CV2 versus a 1 in 114 chance of dying regardless of that infection. As shown later below the risk varied greatly with age, much lower for younger, healthier people.
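The risk figures above reduce to simple ratios. A sketch of the arithmetic using the quoted counts (variable names are mine; 365 days per year and the stated 210-day epidemic window):

```python
# Quoted figures for Canada (2019 demographics, Covid through Aug 28, 2020).
population = 37_589_262
annual_deaths_all_causes = 330_786
covid_deaths = 9_108
epidemic_days = 210

daily_all_causes = annual_deaths_all_causes / 365        # ~906 per day
daily_covid = covid_deaths / epidemic_days               # ~43 per day

risk_all_causes = annual_deaths_all_causes / population  # ~0.88%
risk_covid = covid_deaths / population                   # ~0.0242%

print(f"{daily_all_causes:.0f} vs {daily_covid:.0f} deaths/day")
print(f"annual risk {risk_all_causes:.4%} vs Covid risk {risk_covid:.4%}")
print(f"Covid share of daily deaths: {daily_covid / daily_all_causes:.0%}")
```

The last ratio reproduces the roughly 5% share of daily deaths cited above.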

Background Updated from Previous Post

In reporting on Covid19 pandemic, governments have provided information intended to frighten the public into compliance with orders constraining freedom of movement and activity. For example, the above map of the Canadian experience is all cumulative, and the curve will continue upward as long as cases can be found and deaths attributed.  As shown below, we can work around this myopia by calculating the daily differentials, and then averaging newly reported cases and deaths by seven days to smooth out lumps in the data processing by institutions.

A second major deficiency is lack of reporting of recoveries, including people infected and not requiring hospitalization or, in many cases, without professional diagnosis or treatment. The only recoveries presently to be found are limited statistics on patients released from hospital. The only way to get at the scale of recoveries is to subtract deaths from cases, considering survivors to be in recovery or cured. Comparing such numbers involves the delay between infection, symptoms and death. Herein lies another issue of terminology: a positive test for the SARS CV2 virus is reported as a case of the disease COVID19. In fact, an unknown number of people have been infected without symptoms, and many with very mild discomfort.

August 7 in the UK it was reported (here) that around 10% of coronavirus deaths recorded in England – almost 4,200 – could be wiped from official records due to an error in counting.  Last month, Health Secretary Matt Hancock ordered a review into the way the daily death count was calculated in England citing a possible ‘statistical flaw’.  Academics found that Public Health England’s statistics included everyone who had died after testing positive – even if the death occurred naturally or in a freak accident, and after the person had recovered from the virus.  Numbers will now be reconfigured, counting deaths if a person died within 28 days of testing positive much like Scotland and Northern Ireland…

Professor Heneghan, director of the Centre for Evidence-Based Medicine at Oxford University, who first noticed the error, told the Sun:

‘It is a sensible decision. There is no point attributing deaths to Covid-19 28 days after infection…

For this discussion let’s assume that anyone reported as dying from COVID19 tested positive for the virus at some point prior. From the reasoning above, let us assume that 28 days after testing positive for the virus, survivors can be considered recoveries.

Recoveries are calculated as cases minus deaths with a lag of 28 days. Daily cases and deaths are averages of the seven days ending on the stated date. Recoveries are # of cases from 28 days earlier minus # of daily deaths on the stated date. Since both testing and reports of Covid deaths were sketchy in the beginning, this graph begins with daily deaths as of April 24, 2020 compared to cases reported on March 27, 2020.
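The recovery estimate described above can be sketched as a rolling computation. This is an illustrative implementation of the stated rule (7-day trailing averages, 28-day lag), not the actual data pipeline, and the sample series below is made up:

```python
# Recoveries on day t = 7-day average of new cases 28 days earlier
#                       minus 7-day average of deaths on day t.
def smooth(series, window=7):
    """Trailing averages; None until a full window is available."""
    out = []
    for i in range(len(series)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(series[i - window + 1 : i + 1]) / window)
    return out

def estimate_recoveries(daily_cases, daily_deaths, lag=28):
    cases_avg = smooth(daily_cases)
    deaths_avg = smooth(daily_deaths)
    recoveries = []
    for t in range(len(daily_deaths)):
        c = cases_avg[t - lag] if t >= lag else None
        d = deaths_avg[t]
        recoveries.append(c - d if c is not None and d is not None else None)
    return recoveries

# Illustrative: a flat 427 cases/day and 6 deaths/day yields 421 recoveries/day.
cases = [427] * 60
deaths = [6] * 60
print(estimate_recoveries(cases, deaths)[-1])  # 421.0
```

With real data the lagged case counts differ from current ones, which is why the text reports recoveries (437/day) slightly above current cases (427/day).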

The line shows the Positivity metric for Canada starting at nearly 8% for new cases April 24, 2020. That is, for the 7 day period ending April 24, there were a daily average of 21,772 tests and 1715 new cases reported. Since then the rate of new cases has dropped down, now holding steady at ~1% since mid-June. Yesterday, the daily average number of tests was 45,897 with 427 new cases. So despite more than doubling the testing, the positivity rate is not climbing.  Another view of the data is shown below.
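Positivity is simply new cases divided by tests; checking the two quoted dates (figures taken from the paragraph above):

```python
def positivity(new_cases: float, tests: float) -> float:
    """Share of tests that come back positive."""
    return new_cases / tests

april = positivity(1715, 21772)   # 7-day averages ending April 24, 2020
august = positivity(427, 45897)   # latest 7-day averages
print(f"{april:.1%} then, {august:.1%} now")
```

This confirms the drop from nearly 8% to under 1% despite the doubling of testing volume.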

The scale of testing has increased and now averages over 45,000 a day, while positive tests (cases) hover at 1% positivity.  The shape of the recovery curve resembles the case curve lagged by 28 days, since death rates are a small portion of cases.  The recovery rate has grown from 83% to 99%, holding steady over the last 2 weeks, so that recoveries exceed new positives. This approximation surely understates the number of those infected with SARS CV2 who are healthy afterwards, since antibody studies show infection rates multiples higher than confirmed positive tests (8 times higher in Canada).  In absolute terms, cases are now down to 427 a day and deaths 6 a day, while estimates of recoveries are 437 a day.

The key numbers: 

99% of those tested are not infected with SARS-CoV-2. 

99% of those who are infected recover without dying.

Summary of Canada Covid Epidemic

It took a lot of work, but I was able to produce something akin to the Dutch advice to their citizens.

The media and governmental reports focus on total accumulated numbers which are big enough to scare people to do as they are told.  In the absence of contextual comparisons, citizens have difficulty answering the main (perhaps only) question on their minds:  What are my chances of catching Covid19 and dying from it?

A previous post reported that the Netherlands parliament was provided with the type of guidance everyone wants to see.

For Canadians, the most similar analysis is this one from the Daily Epidemiology Update:

The table presents only those cases with full clinical documentation, covering some 2,194 deaths of the 5,842 total reported.  The numbers show that under 60 years old, few adults and almost no children have anything to fear.

Update May 20, 2020

It is really quite difficult to find cases and deaths broken down by age groups.  For Canadian national statistics, I resorted to a report from Ontario to get the age distributions, since that province provides 69% of the cases outside of Quebec and 87% of the deaths.  Applying those proportions across Canada results in this table for the nation as a whole:

Age     Risk of Test +   Risk of Death   Population per 1 CV Death
<20     0.05%            None            NA
20-39   0.20%            0.000%          431,817
40-59   0.25%            0.002%          42,273
60-79   0.20%            0.020%          4,984
80+     0.76%            0.251%          398

In the worst case, if you are a Canadian aged more than 80 years, you have a 1 in 400 chance of dying from Covid19.  If you are 60 to 80 years old, your odds are 1 in 5,000.  Younger than that, your odds are comparable to winning (or in this case, losing) the lottery.
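The "population per 1 CV death" column is just the reciprocal of the death-risk percentage, as this small sketch shows (the table's entries come from unrounded risks, so they differ slightly from the rounded values below):

```python
def one_in_n(risk_percent: float) -> int:
    """Convert a percentage risk into '1 in N' odds."""
    return round(100 / risk_percent)

print(one_in_n(0.251))  # 80+:   1 in 398
print(one_in_n(0.020))  # 60-79: 1 in 5000
print(one_in_n(0.002))  # 40-59: 1 in 50000
```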

As noted above, Quebec provides the bulk of cases and deaths in Canada, and also reports age distribution more precisely.  The numbers in the table below show risks for Quebecers.

Age      Risk of Test +   Risk of Death   Population per 1 CV Death
0-9      0.13%            0               NA
10-19    0.21%            0               NA
20-29    0.50%            0.000%          289,647
30-39    0.51%            0.001%          152,009
40-49    0.63%            0.001%          73,342
50-59    0.53%            0.005%          21,087
60-69    0.37%            0.021%          4,778
70-79    0.52%            0.094%          1,069
80-89    1.78%            0.469%          213
90+      5.19%            1.608%          62

While some of the risk factors are higher in the viral hotspot of Quebec, it is still the case that under 80 years of age, your chances of dying from Covid19 are less than 1 in 1,000, and far lower the younger you are.

Sturgis Bikers Not Superspreaders

Jennifer Beam Dowd writes at Slate The Sturgis Biker Rally Did Not Cause 266,796 Cases of COVID-19.  Excerpts in italics with my bolds.

The recent mass gathering in South Dakota for the annual Sturgis Motorcycle Rally seemed like the perfect recipe for what epidemiologists call a “superspreading” event. Beginning Aug. 7, an estimated 460,000 attendees from all over the country descended on the small town of Sturgis for a 10-day event filled with indoor and outdoor events such as concerts and drag racing.

Now a new working paper by economist Dhaval Dave and colleagues is making headlines with their estimate that the Sturgis rally led to a shocking 266,796 new cases in the U.S. over a four-week period, which would account for a staggering 19 percent of newly confirmed cases in the U.S. in that time. They estimate the economic cost of these cases at $12.2 billion, based on previous estimates of the statistical cost of treating a COVID-19 patient.

Modeling infection transmission dynamics is hard, as we have seen by the less than stellar performance of many predictive COVID-19 models thus far. (Remember back in April, when the IHME model from the University of Washington predicted zero U.S. deaths in July?) Pandemic spread is difficult both to predict and to explain after the fact—like trying to explain the direction and intensity of current wildfires in the West. While some underlying factors do predict spread, there is a high degree of randomness, and small disturbances (like winds) can cause huge variation across time and space. Many outcomes that social scientists typically study, like income, are more stable and not as susceptible to these “butterfly effects” that threaten the validity of certain research designs.

While this approach may sound sensible, it relies on strong assumptions that rarely hold in the real world. For one thing, there are many other differences between counties full of bike rally fans versus those with none, and therein lies the challenge of creating a good “counterfactual” for the implied experiment—how to compare trends in counties that are different on many geographic, social, and economic dimensions? The “parallel trends” assumption assumes that every county was on a similar trajectory and the only difference was the number of attendees sent to the Sturgis rally. When this “parallel trends” assumption is violated, the resulting estimates are not just off by a little—they can be completely wrong.
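A toy numeric example (made-up case counts, not figures from the paper) shows how a difference-in-differences estimate goes wrong when the parallel-trends assumption fails:

```python
# Difference-in-differences: (change in treated) - (change in control).
def did(treated_pre, treated_post, control_pre, control_post):
    return (treated_post - treated_pre) - (control_post - control_pre)

# Parallel trends hold: both groups drift up by 10; the true rally effect is 5.
print(did(100, 115, 80, 90))   # 5: the effect is recovered

# Parallel trends violated: control counties happened to be declining.
print(did(100, 115, 80, 70))   # 25: the same effect is overstated fivefold
```

The estimate silently attributes any divergence between the groups to the treatment, which is exactly the critique being made here.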

This type of modeling is risky, and the burden of proof for the believability of the assumptions is very high.

If thinking through the required transmission dynamics doesn’t raise your alarm bells, consider this: The paper’s results show that the significant increase in transmission was only evident after Aug. 26. That makes sense—it would be consistent with a lag time for infections from the beginning of the rally. Nonetheless, the authors state that their estimate of the total number of cases, 266,796, represents “19 percent of the 1.4 million cases of COVID-19 in the United States between August 2nd 2020 and September 2nd.” (Italics mine.) In reality, these extra cases must have occurred in the second half of the month, meaning these estimates would account for a staggering 45 percent of U.S. cases over those two weeks. This simply doesn’t seem plausible.

The 266,796 number also overstates the precision of the estimates in the paper even if the model is taken at face value. The confidence intervals for the “high inflow” counties seem to include zero (meaning the authors can’t say with statistical confidence that there was any difference in infections across counties due to the rally). No standard errors (measures of the variability around the estimate) are provided for the main regression results, and many of the p-values for key results are not statistically significant at conventional levels. So even if one believes the design and assumptions, the results are very “noisy” and subject to caveats that don’t merit the broadcasting of the highly specific 266,796 figure with confidence, though I imagine that “somewhere between zero and 450,000 infections” would not have been as headline-grabbing.

The paper also estimates the rise in cases in Meade County, South Dakota, the site of the rally, and reports an increase of between 177 and 195 cases compared with a “synthetic control” of similar counties, an approach similar in spirit to the difference-in-difference model. This represents a 100 to 200 percent increase in cases, which also appears to be a serious overestimate. Looking at the raw case data for Meade County, cumulative cases from Aug. 3 to Sept. 2 increased from 45 to 74, an increase of only 29 cases in total (though a 64 percent increase). With a cumulative case count of only 74 in Meade County by Sept. 2, an estimated increase of at least 177, which is 103 more than the total observed over the whole pandemic, suggests serious problems with the model.
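The raw-count check in the paragraph above is simple arithmetic:

```python
pre, post = 45, 74            # Meade County cumulative cases, Aug. 3 and Sept. 2
increase = post - pre         # 29 cases
pct_rise = increase / pre * 100
print(increase, round(pct_rise))  # 29 64 -- far below the paper's 177-195 estimate
```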

Again, the authors employ a method that implicitly compares what happened in Meade County to similar hypothetical “twin” counties. Counties from within South Dakota and bordering states were excluded since they may also have been directly affected by the rally. Counties that shared similar urbanicity rates, population density, and pre-rally COVID-19 cases per capita were considered good candidates for this counterfactual group. Finding valid comparisons is key. Upon inspection, one of the counties weighted heavily as a “control” was in Hawaii—I think we can agree that islands during a pandemic are not likely a good control group for what is happening in the lower 48.

None of this means that the rally was harmless. Common sense would tell us that such a large event with close contact was risky and did increase transmission. The rise in Meade County was real and noticeable, albeit on the scale of 29 cases. Given the huge inflow to this specific location along with increased testing for the event, a bump was not surprising.

Contact tracing reports have identified cases and deaths linked to the event, but in the range of hundreds.

More broadly, while it’s important for us to understand factors driving COVID-19 transmission, the methodological challenges to identifying these effects at the aggregate level are difficult to overcome. Improved contact tracing and surveys at the individual level are the best way to gain insights into transmission dynamics. (At Dear Pandemic, a COVID-19 science communication effort I run with colleagues, we unfortunately spend much of our time explaining and correcting such misleading statistics.) The authors of this study have used the same study design to estimate the effects of other mass gatherings including the BLM protests and Trump’s June Tulsa, Oklahoma, rally. Each paper has given some part of the political spectrum something they might want to hear but has done very little to illuminate the actual risks of COVID-19 transmission at these events.

Exaggerated headlines and cherry-picking of results for “I told you so” media moments can dangerously undermine the long-term integrity of the science—something we can little afford right now.

Potsdam Does a New Hockey Stick Trick

The paper is Setting the tree-ring record straight by Josef Ludescher, Armin Bunde, Ulf Büntgen & Hans Joachim Schellnhuber.  The title is extremely informative, since the trick is to flatten the tree-ring proxies, removing any warm periods to compare with the present.  Excerpts below with my bolds.

Abstract

Tree-ring chronologies are the main source for annually resolved and absolutely dated temperature reconstructions of the last millennia and thus for studying the intriguing problem of climate impacts. Here we focus on central Europe and compare the tree-ring based temperature reconstruction with reconstructions from harvest dates, long meteorological measurements, and historical model data. We find that all data are long-term persistent, but in the tree-ring based reconstruction the strength of the persistence quantified by the Hurst exponent is remarkably larger (h≅1.02) than in the other data (h= 0.52–0.69), indicating an unrealistic exaggeration of the historical temperature variations. We show how to correct the tree-ring based reconstruction by a mathematical transformation that adjusts the persistence and leads to reduced amplitudes of the warm and cold periods. The new transformed record agrees well with both the observational data and the harvest dates-based reconstructions and allows more realistic studies of climate impacts. It confirms that the present warming is unprecedented.

Discussion

Figure 1a shows the tree-ring based reconstruction (TRBR) of central European summer temperatures (Büntgen et al. 2011), together with its 30 year moving average that reveals the long-term temperature variations in the record. Particularly large temperature increases occurred between 1340 and 1410 and between 1820 and 1870 that are even comparable in amplitude with the recent warming trend since 1970, indicating that the recent (anthropogenic) warming may not be unprecedented.

Tree ring-based reconstruction of the central European temperatures in the last millennium. a The reconstructed June-August temperatures in units of the record’s standard deviation. The red line depicts the moving average over 30 years. b, c The DFA2 fluctuation functions F(s) and the WT2 fluctuation functions G(s), respectively, for the reconstructed data from a, for monthly observational data (Swiss temperatures from Berkeley Earth, station data from Prague) and the MPI-ESM-P-past1000 model output for central European summer temperatures, from top to bottom. For the TRBR and model data, the time scale s is in years, while for the two observational records, it is in months. Note that in the double logarithmic presentation, the asymptotic slopes (Hurst exponents h) for the reconstruction data (h≅1) and the observational and model data (h≅0.6) differ strongly

To correct the enhanced long-term persistence in the TRBR, we are interested in a mathematical transformation of the data, which lowers the natural long-term persistence while leaving the gross features of the record, the positions of the warm and cold periods, unchanged. We performed the following mathematical transformation to change the original TRBR Hurst exponent h0=1.03 to h1=0.60 and thus to be in line with the observational, harvest and model data. Since this transformation is only suitable for altering a record’s natural long-term persistence, i.e., in the absence of external trends, we transformed the TRBR data between 1000 and 1990, before the current anthropogenic trend became relevant.
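The paper does not publish code, but a persistence-adjusting transformation of this kind can be sketched with Fourier filtering: for long-term persistent noise the power spectrum scales as S(f) ~ f^(1-2h), so rescaling each Fourier amplitude by f^(h0 - h1) shifts the Hurst exponent from h0 toward h1 while leaving the phases, and hence the positions of the warm and cold periods, unchanged. This is a generic sketch under that assumption, not the authors' actual method:

```python
import numpy as np

def adjust_hurst(x: np.ndarray, h0: float, h1: float) -> np.ndarray:
    """Fourier-filter record x so its long-term persistence drops from h0 to h1."""
    n = len(x)
    mean, std = x.mean(), x.std()
    X = np.fft.rfft(x - mean)
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                 # avoid a zero frequency at the mean term
    X *= f ** (h0 - h1)         # h1 < h0 damps the low (long-term) frequencies
    y = np.fft.irfft(X, n)
    return y / y.std() * std + mean   # restore the original scale

# Example: shrink the persistence of a toy "reconstruction" from 1.03 to 0.60.
x = np.cumsum(np.random.randn(990))   # a strongly persistent toy record
y = adjust_hurst(x, 1.03, 0.60)       # long-term swings reduced, structure kept
```

Damping the low frequencies is what reduces the depths of the minima and heights of the maxima in the moving average, exactly as Fig. 4a describes.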

Figure 4a compares the transformed TRBR data (blue) with h1=0.6 with the original TRBR data (black). The bold lines are the 30-year moving averages. The figure shows that by the transformation the structure of the original TRBR data is conserved, but the climate variations characterized by the depths of the minima and the heights of the maxima are reduced.

Original and transformed tree-ring proxy temperature record. a Compares the original TRBR record for the period 1000–1990, where the Hurst exponent h is 1.03 (black), with the transformed TRBR record, where h≡h1=0.6 (blue). For better visibility, the transformed TRBR record has been shifted downward by 5 units of its standard deviation. b How the magnitudes of the cold periods in the transformed TRBR record decrease with decreasing Hurst exponent h1. The magnitudes are quantified by the differences of the 30 year moving averages between the beginning and the end of the respective periods. c Compares the 30-year moving averages of the original and the transformed TRBR record (h=0.6) with the 30-year moving average of the observational temperatures from Switzerland. The comparison shows that the transformed TRBR record fits quite nicely with the observational data

To see how the strength of the long-term variations in the transformed TRBR data depends on their Hurst exponent h1, we have determined, in the 30-year moving average, the temperature differences in 4 periods (1415–1465, 1515–1536, 1562–1595, 1793–1824) where the greatest changes between 1350 and 1950 occur. The result is shown in Fig. 4b. The figure shows that the temperature difference between the beginning and the end of each period decreases continuously with decreasing h. For h around 0.6, the temperature differences are roughly halved.

Conclusion

Since tree ring-based reconstructions play an important role in the understanding of past temperature variability, we suggest the use of the Hurst exponent as a standard practice to assess the reconstructions’ low-frequency properties and to compare the determined values with the Hurst exponents of other respective time series (observational, harvest dates, models). If deviations from the expected values are detected, the data should be transformed to adjust the Hurst exponent. This will lead to a more realistic reconstruction of the record’s low-frequency signal and thus to a better understanding of the climate variations of the past.

My Comment

Wow!  Just Wow!  The Mann-made Hockey Stick was found bogus because it was produced by grafting a high-resolution instrumental temperature record on top of a low-resolution tree ring proxy record.  Now climatists want to erase four bumps in the Medieval period lest they appear comparable to contemporary temperatures sampled minute by minute.  A simple tweaking of a formula achieves the desired result.  Fluctuations which were decadal are now smoothed and cannot compete with modern annual and monthly extremes.  Well done! (extreme snark on)

Background:  See Return of the Hockey Stick

COVID-19 is a lack of nutrients, exploited by a virus

Colleen Huber, NMD explains at The Primary Doctor COVID-19 is a lack of nutrients, exploited by a virus. Excerpts in italics with my bolds.

“There already exist numerous ways to reliably prevent, mitigate, and even cure COVID-19, including in late-stage patients who are already ventilator-dependent.”
– Thomas Levy, MD JD

Abstract

COVID-19 disease is alleged to be caused by the RNA coronavirus SARS-CoV-2. However, clinical findings from around the world show a sharp inflection point from morbidity to recovery on supplementation of one or another nutrient. In other cases, severe COVID-19 morbidity is significantly correlated with deficiency of a particular nutrient. Any of the nutrients that are discussed in this paper, when used alone or with a co-factor, has been either sufficient for prompt and complete recovery in a majority of patients treated or highly correlated with low morbidity and high survival from the disease.

If any one of several nutrients is adequate for victory over COVID-19, then logically (the contrapositive), the simultaneous deficiency of all of those same nutrients is the necessary preliminary condition for the subsequent presence of the virus to result in COVID-19 morbidity and mortality. This paper will show which nutrients are lacking in those with severe pathogenesis, and why all of those nutrients must be deficient in order for severe COVID-19 disease to occur in an individual, and that supplementation with any one of these nutrients is likely to result in recovery.

[Note: The full list of nutrients discussed in the paper are as follows:

  • Vitamin D
  • Glutathione
  • Zinc
  • Quercetin
  • Epigallocatechin-gallate (EGCG)
  • Vitamin C
  • Selenium

    Below are excerpts relating to the big 3 in the image above.]

Vitamin D vs COVID19

Vitamin D3 (commonly known simply as “vitamin D,” but formally as cholecalciferol) may be the most potent defense available against COVID-19, from the studies described below. It may also be the most easily acquired COVID-19 treatment, because vitamin D is produced in the skin on exposure to sunlight, with further processing in the liver and then in the kidneys to its fully useful form.

In this large Israeli study of over 7,000 people, “low plasma [vitamin D] levels almost doubled the risk for hospitalization due to the COVID-19 infection in the Israeli studied cohort.” Also, “the mean plasma vitamin D level was significantly lower among those who tested positive than negative for COVID-19.” (1)

In a retrospective cohort study in Indonesia of 780 COVID-19-positive patients, those with below-normal vitamin D levels had increased odds of death. (2)

The correlation between low serum vitamin D levels and COVID-19 mortality was so high in that study that this nutrient may turn out to be the most decisively valuable against COVID-19. This graph (3) shows the stark contrast found between high and low vitamin D levels and COVID-19 survivability.

In European countries also, a significant inverse relationship was found between serum vitamin D levels and COVID-19 mortality. Mean levels of vitamin D and COVID-19 mortality in twenty European countries were examined. Aging populations, which have been the worst affected by COVID-19, were also found to have the lowest serum vitamin D levels. (4)

Vitamin D is known to be essential to the maturing of macrophages, which in turn are a necessary tool of the immune system against pathogenic microbes. Macrophages with vitamin D also produce hydrogen peroxide, an important pro-oxidant molecular weapon against microbial pathogens. (5) However, vitamin D also stimulates production of anti-microbial peptides that appear in natural killer cells and neutrophils in respiratory tract epithelial cells, where they are able to protect the lungs from the ravages of infection.

One of the most alarming features of COVID-19 disease in the clinical setting has been the “cytokine storm,” which is itself life-threatening. It is an inflammatory over-reaction to the replicating viral pathogen. The utility of Vitamin D for the COVID-19 patient may best be appreciated in its prevention of excessive inflammatory cytokines, thereby sparing the patient of the body’s most severe reactions to the virus. (6) Vitamin D deficiency is also implicated in acute respiratory distress syndrome. (7)

Respiratory infectious disease prevalence has a strong seasonality through the centuries and around the world. That season peaks in the winter and early spring, after the year’s fewest hours and lowest angle of sunlight on the winter solstice. That lack of sunlight occurs during a time of the least skin surface exposed to freezing weather, and therefore the least endogenous vitamin D production. Supplementation of oral vitamin D through this difficult season may therefore be a prudent prophylaxis.

Zinc vs COVID-19

​Zinc has many functions in the cell. One of these is to inhibit replication of RNA-type viruses. SARS-CoV-2 is such a virus. The mechanism is that zinc blocks the enzyme RNA-dependent RNA polymerase (RdRp). This enzyme is required for replication of the virus. Without this enzyme, copying of the viral RNA cannot occur. The virus’s assault against the body is not merely inhibited. It is stopped with adequate zinc.

Zinc, however, is mostly kept out of the cell by other mechanisms, partly because zinc plays a role in normal cell death. A survival mechanism of a normal cell is to therefore limit the zinc that can enter.

However, in the event of infection with an RNA virus, a useful strategy for medical treatment is to bring enough zinc into cells to block viral replication. What is needed is a substance that can accompany and transport zinc across the cell membrane and into the cell. Such a substance is an ionophore; it transports the zinc ion. The function is to allow more zinc into a cell than would typically enter. For this purpose, zinc ionophore agents are used in clinical settings together with zinc as a combination strategy against an RNA virus infection. I will discuss a few of these zinc ionophores.

It should also be noted that zinc deficiency is characterized by loss of senses of smell and/or taste. (11) These are also known to be common symptoms of COVID-19 patients. (12) This is further evidence that deficiency of zinc may be correlated with COVID-19 morbidity.

Zinc and Hydroxychloroquine vs COVID-19

Both hydroxychloroquine (HCQ) and its historical predecessor chloroquine (CQ) are on the World Health Organization’s List of Essential Medicines. The latter was discovered in 1934, and it is still used to manage malaria, although resistant strains of malaria make it less useful these days for that purpose. HCQ has been approved by the US Food and Drug Administration (FDA) for over 65 years. It has been prescribed billions of times throughout the world over the previous decades. The US Centers for Disease Control says that HCQ can be prescribed to adults and children of all ages. It can also be safely taken by pregnant women and nursing mothers. (13) It is among the safest of prescription drugs in the US, which is why it is sold over the counter through much of the world. (14) Both HCQ and CQ are chemically similar to quinine, from the bark of Cinchona trees, which is also a flavoring used in tonic water.

These drugs have been observed to raise the pH of the cell and the endosomes in which entering viruses are packaged. These drugs are easily taken up into the cells of the body. Viruses, however, enter cells packaged in endosomes, and require a low pH acidic environment in the endosome in order to replicate. Once HCQ or CQ are inside cells, they easily enter endosomes, and therefore viruses are stopped from replicating (reproducing) due to this alkalinizing effect. Dr. Peter D’Adamo describes and illustrates these mechanisms in more detail. (15)

So in summary of these functions, HCQ and CQ not only shepherd zinc into the cell, where zinc blocks the enzyme that is required for replication of RNA viruses, but either of these drugs also raises pH inside the cell to a level where viral replication is impossible.

The combination of HCQ, azithromycin and zinc has shown outstanding results in resolving COVID-19. See: Positive HCQ Treatment Outcomes in 88 International Studies

Veteran virologist Steven Hatfill writes of hydroxychloroquine: The Real HCQ Story: What We Now Know

Yale epidemiology professor Harvey Risch, a highly respected scientist with over 300 published peer-reviewed studies, writes of the contrast between the successful clinical use of HCQ and zinc on the one hand, and its suppression by governments and industry on the other:

Dr. Risch, who has 39,779 citations on Google Scholar, adds that “US cumulative deaths through July 15, 2020 are 140,000. Had we permitted HCQ use liberally, we would have saved half, 70,000, and it is very possible we could have saved 3/4, or 105,000.”

There are other zinc ionophores that are also being used together with zinc successfully against COVID-19: Zinc and Quercetin vs COVID-19; Zinc and EGCG vs COVID-19 (Epigallocatechin-gallate (EGCG) is a green tea extract).

Vitamin C vs COVID-19

At the Ruijin Hospital in Shanghai, 50 COVID-19 patients were treated with vitamin C. Their hospital stays were 5 days shorter than those of COVID-19 patients not treated with vitamin C. There were no deaths in the vitamin C group, and no significant side effects were noted either. In the other group, those who did not receive vitamin C, there were 3 deaths. (25)

Dr. Zhiyong Peng conducted the first clinical trial of high-dose intravenous vitamin C with COVID19 patients at Wuhan University in Wuhan, China. His findings were that this treatment of COVID-19 patients reduced their inflammation significantly, and that it reduced their stays in ICU and hospitals. (26) (27)

Vitamin C should be no surprise as an addition to the list of nutrients that provide life-saving help against COVID-19. Dr. Fred Klenner wrote in 1948 about use of intravenous and intramuscular use of vitamin C against viral pneumonia: “In almost every case the patient felt better within an hour after the first injection and noted a very definite change after two hours.” And “three to seven injections gave complete clinical and x-ray response in all of our [42] cases.” (28)

Vitamin C has numerous well-studied and documented mechanisms against viruses. Perhaps the most important of these is the production of Type I interferons. (31) This in turn upregulates natural killer cells and cytotoxic T-lymphocytes for anti-viral activity. (32) However, it has been shown to simply inactivate both RNA and DNA viruses. (33) It also detoxifies viral products that are associated with inflammation and pain. High dose vitamin C and oral doses over 3 grams are established to both prevent and treat a variety of viral infections. (34) (35)

Conclusion

To repeat Dr. Thomas Levy’s memorable quote at the beginning of this paper, “There already exist numerous ways to reliably prevent, mitigate, and even cure COVID-19, including in late-stage patients who are already ventilator-dependent.” Dr. Levy documents many of them in this paper. (38)

Those who were diagnosed and sickened from the most feared viral pathogen of our time fell into several categories. Either they died from the disease, or they healed from one of the interventions discussed in this paper, or fortunately, healed with none of those interventions. The above studies show that it was enough that one or the other of the nutrients discussed herein was adequate to prevent or to vanquish COVID-19, without the need to use all of them. Therefore, because any one of these nutrients proved adequate to heal patients to complete recovery, then the patients who succumbed to COVID-19 disease had likely been deficient in all of these nutrients, and a lack of all of these nutrients was likely a necessary condition for pathogenesis of COVID-19.

Because of the therapeutic impact and success that each of the above nutrients have had in reversing the devastation of COVID-19, they each must be made available immediately and widely throughout the world, for both preventative and prompt therapeutic uses. There is therefore no need or justification for pandemic status of COVID-19. Furthermore, nutritional interventions should be used without hesitation as first-line treatment, as well as prevention, of COVID-19.

 

Fires in US West: History vs. Hysteria

People who struggle with anxiety are known to have moments of “hair on fire.” IOW, letting your fears take over is like setting your own hair on fire. Currently the media, pandering as always to primal fear instincts, is declaring that the US West is on fire, and it is our fault. Let’s see what we can do to help them get a grip.

First the media hysteria.

BAY AREA ON SEPTEMBER 9, 2020. IMAGE: BAY AREA AIR QUALITY

The Headlines are Screaming!

Why wildfire smoke can turn the sky orange and damage your lungs (Vox)
A 2006 Heat Wave Was a Wake-Up Call. Why Didn’t L.A. Pay Attention? (Curbed)
Wildfires and weather extremes: It’s not coincidence, it’s climate change (CBS News)
Trillions up in smoke: The staggering economic cost of climate change inaction (The New Daily)
‘Zombie Fires’ May Have Sparked Record High Carbon Emissions in the Arctic (Smithsonian Magazine)
How Did the Wildfires Start? Here’s What You Need to Know (The New York Times)
‘It Looks Like Doomsday’: California Residents React to Orange Sky (The New York Times)
Apocalyptic Orange Haze And Darkness Blanket California Amid Fires (HuffPost US)
Fire experts: Western fires are ‘unreal’ (Mashable)
Devastating Wildfires Ravage The West (HuffPost US)
‘Entire Western US on Fire’ as Region Faces Deadly Flames Compounded by Heatwave, Blackouts, and Coronavirus (Common Dreams)
Unprecedented Wildfires Turn California Skies Orange (Vice US)

Now the History.  Are these wildfires “unprecedented?”

The National Interagency Fire Center provides the facts and historical context.  Here are the details on this year and the last decade.

Note that for the year-to-date, 2020 is below average both for number of fires and acreage.  Three years were over 8M acres at this point in the year.  And one of them (2012) was also an election year, but lacked the current media fury politicizing everything.

Annual Number of Acres Burned in US Wildland Fires, 1980-2019

Background Information from Previous Post Arctic on Fire! Not.

1. Summer is fire season for northern boreal forests and tundra.

From the Canadian National Forestry Database

Since 1990, “wildland fires” across Canada have consumed an average of 2.5 million hectares a year.

Recent Canadian Forest Fire Activity     2015        2016        2017
Area burned (hectares)                   3,861,647   1,416,053   3,371,833
Number of fires                          7,140       5,203       5,611

The total area of forest and other wooded land in Canada is 396,433,600 hectares.  So the data show that in an average year about 0.6% of Canadian wooded area burns, in seasons ranging from around 1,000 fires in a slow year to over 10,000 fires and 7M hectares burned in 1994.
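The 0.6% figure is easy to verify from the two numbers above:

```python
avg_burned_ha = 2_500_000        # average hectares burned per year since 1990
wooded_area_ha = 396_433_600     # forest and other wooded land in Canada
print(f"{avg_burned_ha / wooded_area_ha:.2%}")  # 0.63%
```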

2. With the warming since 1980 some years have seen increased areas burning.

From Wildland Fire in High Latitudes, A. York et al. (2017)

Despite the low annual temperatures and short growing seasons characteristic of northern ecosystems, wildland fire affects both boreal forest (the broad band of mostly coniferous trees that generally stretches across the area north of the July 13° C isotherm in North America and Eurasia, also known as Taiga) and adjacent tundra regions. In fact, fire is the dominant ecological disturbance in boreal forest, the world’s largest terrestrial biome. Fire disturbance affects these high latitude systems at multiple scales, including direct release of carbon through combustion (Kasischke et al., 2000) and interactions with vegetation succession (Mann et al., 2012; Johnstone et al., 2010), biogeochemical cycles (Bond-Lamberty et al., 2007), energy balance (Rogers et al., 2015), and hydrology (Liu et al., 2005). About 35% of global soil carbon is stored in tundra and boreal systems (Scharlemann et al., 2014) that are potentially vulnerable to fire disturbance (Turetsky et al., 2015). This brief report summarizes evidence from Alaska and Canada on variability and trends in fire disturbance in high latitudes and outlines how short-term fire weather conditions in these regions influence area burned.

Climate is a dominant control of fire activity in both boreal and tundra ecosystems. The relationship between climate and fire is strongly nonlinear, with the likelihood of fire occurrence within a 30-year period much higher where mean July temperatures exceed 13.4° C (56° F) (Young et al., 2017). High latitude fire regimes appear to be responding rapidly to recent environmental changes associated with the warming climate. Although highly variable, area burned has increased over the past several decades in much of boreal North America (Kasischke and Turetsky, 2006; Gillett et al., 2004). Since the early 1960s, the number of individual fire events and the size of those events has increased, contributing to more frequent large fire years in northwestern North America (Kasischke and Turetsky, 2006). Figure 1 shows annual area burned per year in Alaska (a) and Northwest Territories (b) since 1980, including both boreal and tundra regions.

[Comment: Note that both Alaska and NW Territories see about 500k hectares burned on average each year since 1980.  And in each region, three years have been much above that average, with no particular pattern as to timing.]

Recent large fire seasons in high latitudes include 2014 in the Northwest Territories, where 385 fires burned 8.4 million acres, and 2015 in Alaska, where 766 fires burned 5.1 million acres (Figs. 1 & 2)—more than half the total acreage burned in the US (NWT, 2015; AICC, 2015). Multiple northern communities have been threatened or damaged by recent wildfires, notably Fort McMurray, Alberta, where 88,000 people were evacuated and 2400 structures were destroyed in May 2016. Examples of recent significant tundra fires include the 2007 Anaktuvuk River Fire, the largest and longest-burning fire known to have occurred on the North Slope of Alaska (256,000 acres), which initiated widespread thermokarst development (Jones et al., 2015). An unusually large tundra fire in western Greenland in 2017 received considerable media attention.

Large fire events such as these require the confluence of receptive fuels that will promote fire growth once ignited, periods of warm and dry weather conditions, and a source of ignition—most commonly, convective thunderstorms that produce lightning ignitions. High latitude ecosystems are characterized by unique fuels—in particular, fast-drying beds of mosses, lichens, resinous shrubs, and accumulated organic material (duff) that underlie dense, highly flammable conifers. These understory fuels cure rapidly during warm, dry periods with long daylight hours in June and July. Consequently, extended periods of drought are not required to increase fire danger to extreme levels in these systems.

Most acreage burned in high latitude systems occurs during sporadic periods of high fire activity; 50% of the acreage burned in Alaska from 2002 to 2010 was consumed in just 36 days (Barrett et al., 2016). Figure 3 shows cumulative acres burned in the four largest fire seasons in Alaska since 1990 (from Fig. 1) and illustrates the varying trajectories of each season. Some seasons show periods of rapid growth during unusually warm and dry weather (2004, 2009, 2015), while others (2004 and 2005) were prolonged into the fall in the absence of season-ending rain events. In 2004, which was Alaska’s largest wildfire season at 6.6 million acres, the trajectory was characterized by both rapid mid-season growth and extended activity into September. These different pathways to large fire seasons demonstrate the importance of intraseasonal weather variability and the timing of dynamical features. As another example, although not large in total acres burned, the 2016 wildland fire season in Alaska was more than 6 months long, with incidents requiring response from mid-April through late October (AICC, 2016).

3. Wildfires are part of the ecology cycle making the biosphere sustainable.

Forest Fire Ecology: Fire in Canada’s forests varies in its role and importance.

In the moist forests of the west coast, wildland fires are relatively infrequent and generally play a minor ecological role.

In boreal forests, the complete opposite is true. Fires are frequent and their ecological influence at all levels—species, stand and landscape—drives boreal forest vegetation dynamics. This in turn affects the movement of wildlife populations, whose need for food and cover means they must relocate as the forest patterns change.

The Canadian boreal forest is a mosaic of species and stands. It ranges in composition from pure deciduous and mixed deciduous-coniferous to pure coniferous stands.

The diversity of the forest mosaic is largely the result of many fires occurring on the landscape over a long period of time. These fires have varied in frequency, intensity, severity, size, shape and season of burn.

The fire management balancing act: Fire is a vital ecological component of Canadian forests and will always be present.

Not all wildland fires should (or can) be controlled. Forest agencies work to harness the force of natural fire to take advantage of its ecological benefits while at the same time limiting its potential damage and costs.

Tundra Fire Ecology

From Arctic tundra fires: natural variability and responses to climate change, Feng Sheng Hu et al. (2015)

Circumpolar tundra fires have primarily occurred in the portions of the Arctic with warmer summer conditions, especially Alaska and northeastern Siberia (Figure 1). Satellite-based estimates (Giglio et al. 2010; Global Fire Emissions Database 2015) show that for the period of 2002–2013, 0.48% of the Alaskan tundra has burned, which is four times the estimate for the Arctic as a whole (0.12%; Figure 1). These estimates encompass tundra ecoregions with a wide range of fire regimes. For instance, within Alaska, the observational record of the past 60 years indicates that only 1.4% of the North Slope ecoregion has burned (Rocha et al. 2012); 68% of the total burned area in this ecoregion was associated with a single event, the 2007 AR Fire.

The Noatak and Seward Peninsula ecoregions are the most flammable of the tundra biome, and both contain areas that have experienced multiple fires within the past 60 years (Rocha et al. 2012). This high level of fire activity suggests that fuel availability has not been a major limiting factor for fire occurrence in some tundra regions, probably because of the rapid post-fire recovery of tundra vegetation (Racine et al. 1987; Bret-Harte et al. 2013) and the abundance of peaty soils.

However, the wide range of tundra-fire regimes in the modern record results from spatial variations in climate and fuel conditions among ecoregions. For example, frequent tundra burning in the Noatak ecoregion reflects relatively warm/dry climate conditions, whereas the extreme rarity of tundra fires in southwestern Alaska reflects a wet regional climate and abundant lakes that act as natural firebreaks.

Fire alters the surface properties, energy balance, and carbon (C) storage of many terrestrial ecosystems. These effects are particularly marked in Arctic tundra (Figure 5), where fires can catalyze biogeochemical and energetic processes that have historically been limited by low temperatures.

In contrast to the long-term impacts of tundra fires on soil processes, post-fire vegetation recovery is unexpectedly rapid. Across all burned areas in the Alaskan tundra, surface greenness recovered within a decade after burning (Figure 6; Rocha et al. 2012). This rapid recovery was fueled by belowground C reserves in roots and rhizomes, increased nutrient availability from ash, and elevated soil temperatures.

At present, the primary objective for wildland fire management in tundra ecosystems is to maintain biodiversity through wildland fires while also protecting life, property, and sensitive resources. In Alaska, the majority of Arctic tundra is managed under the “Limited Protection” option, and most natural ignitions are managed for the purpose of preserving fire in its natural role in ecosystems. Under future scenarios of climate and tundra burning, managing tundra fire is likely to become increasingly complex. Land managers and policy makers will need to consider trade-offs between fire’s ecological roles and its socioeconomic impacts.

4. Arctic fire regimes involve numerous interacting factors.

Frequent Fires in Ancient Shrub Tundra: Implications of Paleorecords for Arctic Environmental Change
Philip E. Higuera et al. (2008)

Although our fire-history records provide unique insights into the potential response of modern tundra ecosystems to climate and vegetation change, they are imperfect analogs for future fire regimes. First, ongoing vegetation changes differ from those of the late-glacial period: several shrub taxa (Salix, Alnus, and Betula) are currently expanding into tundra [10], whereas Betula was the primary constituent of the ancient shrub tundra. The lower flammability of Alnus and Salix compared to Betula could make future shrub tundra less flammable than the ancient shrub tundra. Second, mechanisms of past and future climate change also differ. In the late-glacial and early-Holocene periods, Alaskan climate was responding to shrinking continental ice volumes, sea-level changes, and amplified seasonality arising from changes in the seasonal cycle of insolation [13]; in the future, increased concentrations of atmospheric greenhouse gases are projected to cause year-round warming in the Arctic, but with a greater increase in winter months [8]. Finally, we know little about the potential effects of a variety of biological and physical processes on climate-vegetation-fire interactions. For example, permafrost melting as a result of future warming [8] and/or increased burning [24] could further facilitate fires by promoting shrub expansion [10], or inhibit fires by increasing soil moisture [24].

5. The Arctic has adapted to many fire regimes stronger than today’s activity.

The Burning Tundra: A Look Back at the Last 6,000 Years of Fire in the Noatak National Preserve, Northwestern Alaska

Fire history in the Noatak also suggests that subtle changes in vegetation were linked to changes in tundra fire occurrence. Spatial variability across the study region suggests that vegetation responded to local-scale climate, which in turn influenced the flammability of surrounding areas. This work adds to evidence from ‘ancient’ shrub tundra in the southcentral Brooks Range suggesting that vegetation change will likely modify tundra fire regimes, and it further suggests that the direction of this impact will depend upon the specific makeup of future tundra vegetation. Ongoing climate-related vegetation change in arctic tundra such as increasing shrub abundance in response to warming temperatures (e.g., Tape et al. 2006), could both increase (e.g., birch) or decrease (e.g., alder) the probability of future tundra fires.

This study provides estimated fire return intervals (FRIs) for one of the most flammable tundra ecosystems in Alaska. Fire managers require this basic information, and it provides a valuable context for ongoing and future environmental change. At most sites, FRIs varied through time in response to changes in climate and local vegetation. Thus, an individual mean or median FRI does not capture the range of variability in tundra fire occurrence. Long-term mean FRIs in many periods were both shorter than estimates based on the past 60 years and statistically indistinct from mean FRIs found in Alaskan boreal forests (e.g., Higuera et al. 2009) (Figure 2). These results imply that tundra ecosystems have been resilient to relatively frequent burning over the past 6,000 years, which has implications for both managers and scientists concerned about environmental change in tundra ecosystems. For example, increased tundra fire occurrence could negatively impact winter forage for the Western Arctic Caribou Herd (Joly et al. 2009). Although the Noatak is only a portion of this herd’s range, our results indicate that if caribou utilized the study area over the past 6,000 years, then they have successfully co-existed with relatively frequent fire.

N. Atlantic August 2020

RAPID Array measuring North Atlantic SSTs.

For the last few years, observers have been speculating about when the North Atlantic will start the next phase shift from warm to cold. The way 2018 went, with 2019 following suit, suggested this may be the onset.  However, 2020 started out against that trend and is matching 2016, the last warm year.  First some background.

Source: Energy and Education Canada

An example is this report in May 2015 The Atlantic is entering a cool phase that will change the world’s weather by Gerald McCarthy and Evan Haigh of the RAPID Atlantic monitoring project. Excerpts in italics with my bolds.

This is known as the Atlantic Multidecadal Oscillation (AMO), and the transition between its positive and negative phases can be very rapid. For example, Atlantic temperatures declined by 0.1ºC per decade from the 1940s to the 1970s. By comparison, global surface warming is estimated at 0.5ºC per century – a rate twice as slow.

In many parts of the world, the AMO has been linked with decade-long temperature and rainfall trends. Certainly – and perhaps obviously – the mean temperature of islands downwind of the Atlantic such as Britain and Ireland show almost exactly the same temperature fluctuations as the AMO.

Atlantic oscillations are associated with the frequency of hurricanes and droughts. When the AMO is in the warm phase, there are more hurricanes in the Atlantic and droughts in the US Midwest tend to be more frequent and prolonged. In the Pacific Northwest, a positive AMO leads to more rainfall.

A negative AMO (cooler ocean) is associated with reduced rainfall in the vulnerable Sahel region of Africa. The prolonged negative AMO was associated with the infamous Ethiopian famine in the mid-1980s. In the UK it tends to mean reduced summer rainfall – the mythical “barbeque summer”.

Our results show that ocean circulation responds to the first mode of Atlantic atmospheric forcing, the North Atlantic Oscillation, through circulation changes between the subtropical and subpolar gyres – the intergyre region. This is a major influence on the wind patterns and the heat transferred between the atmosphere and ocean.

The observations that we do have of the Atlantic overturning circulation over the past ten years show that it is declining. As a result, we expect the AMO is moving to a negative (colder surface waters) phase. This is consistent with observations of temperature in the North Atlantic.

Cold “blobs” in the North Atlantic have been reported, but they are usually winter phenomena. For example, in April 2016 the SST anomalies looked like this

But by September, the picture changed to this

And we know from the Kaplan AMO dataset that 2016 summer SSTs were right up there with 1998 and 2010 as the highest recorded.

As the graph above suggests, this body of water is also important for tropical cyclones, since warmer water provides more energy.  But those are annual averages, and I am interested in the summer pulses of warm water into the Arctic. As I have noted in my monthly HadSST3 reports, most summers since 2003 there have been warm pulses in the north Atlantic, and 2020 is another of them.

The AMO Index is from Kaplan SST v2, the unaltered and not detrended dataset. By definition, the data are monthly average SSTs interpolated to a 5×5 grid over the North Atlantic, basically 0 to 70N.  The graph shows the warmest month, August, beginning to rise after 1993 up to 1998, with a series of matching years since.  The El Nino years of 2010 and 2016 are obvious, now matched by 2020, but without the Pacific warming.

Because McCarthy refers to hints of cooling to come in the N. Atlantic, let’s take a closer look at some AMO years in the last 2 decades.

The 2020 North Atlantic Surprise
This graph shows monthly AMO temps for some important years. The peak years were 1998, 2010 and 2016, with the latter emphasized as the most recent. The other years show lesser warming, with 2007 emphasized as the coolest in the last 20 years. Note the red 2018 line was at the bottom of all these tracks.  2019 began slightly cooler than January 2018, then tracked closely before rising in the summer months.  Through December, 2019 tracked warmer than 2018 but cooler than other recent years in the North Atlantic.

In 2020, following a warm January, N. Atlantic temps in February, March and April were the highest in the record. Now Summer 2020 temps are as warm as 2016 and 2017.  That is a concern for the current hurricane season, along with the lack of a Pacific El Nino providing wind shear against developing tropical storms.

More recently, temps in higher Atlantic latitudes (45N to 65N) have warmed rapidly, as shown in this graph and map from Tropical Tidbits (Levi Cowan).

Footnote:  Levi Cowan’s Tropical Tidbits is an excellent source of information regarding tropical storm activity, even before disturbances are assigned names, as well as ones like tropical storm Paulette now in mid-Atlantic moving toward the US east coast.

The Truth About CV Tests

The people’s instincts are right, though they have been kept in the dark about this “pandemic” that isn’t.  Responsible citizens are starting to act out their outrage at being victimized by a medical-industrial complex (to update Eisenhower’s warning of decades ago).  The truth is, governments are not justified in taking away inalienable rights to life, liberty and the pursuit of happiness.  There are several layers of disinformation involved in scaring the public.  This post digs into the CV tests, and why the results don’t mean what the media and officials claim.

For months now, I have been updating the progress in Canada of the CV outbreak.  A previous post, included later on, goes into the details of extracting data on tests, persons testing positive (termed “cases” without regard for illness symptoms) and deaths after testing positive.  Currently, the contagion looks like this.

The graph shows that deaths are fewer than 5 a day, compared to a daily death rate of 906 in Canada from all causes.  Also significant is the positivity ratio:  the % of persons testing positive out of all persons tested each day.  That % has been fairly steady for months now:  1% positive means 99% of those tested are not infected. And this is despite more than doubling the rate of testing.
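The positivity arithmetic above is simple enough to sketch. The daily figures here are illustrative placeholders in the same ballpark as the Canadian numbers discussed later, not official data:

```python
# Positivity ratio: fraction of daily tests that come back positive.
def positivity(new_positives: int, tests: int) -> float:
    """Fraction of tests returning a positive result."""
    return new_positives / tests

# Illustrative: roughly 450 positives out of 45,000 daily tests gives ~1%.
rate = positivity(450, 45_000)
print(f"{rate:.1%} positive; {1 - rate:.1%} of those tested are not positive")
```

The point is that positivity is a ratio: doubling the tests while positivity stays flat doubles the raw positive count without indicating any growth in prevalence.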

But what does testing positive actually mean?  Herein lies more truth that has been hidden from the public for the sake of an agenda to control free movement and activity.  Background context comes from  Could Rapid Coronavirus Testing Help Life Return To Normal?, an interview at On Point with Dr. Michael Mina.  Excerpts in italics with my bolds. H/T Kip Hansen

A sign displays a new rapid coronavirus test on the new Abbott ID Now machine at a ProHEALTH center in Brooklyn on August 27, 2020 in New York City. (Spencer Platt/Getty Images)

Dr. Michael Mina:

COVID tests can actually be put onto a piece of paper, very much like a pregnancy test. In fact, it’s almost exactly like a pregnancy test. But instead of looking for the hormones that tell if somebody is pregnant, it looks for the virus proteins that are part of the SARS-CoV-2 virus. And it would be very simple: You’d either swab the front of your nose or you’d take some saliva from under your tongue, for example, and put it onto one of these paper strips, essentially. And if you see a line, it means you’re positive. And if you see no line, it means you are negative, at least for having a high viral load that could be transmissible to other people.

An antigen is one of the proteins in the virus. And so it is unlike the PCR test, which is what most people who have received a test today have generally received. Those types of tests look for the genome of the virus, its RNA; you could think of RNA the same way that humans have DNA. This virus has RNA. But instead of looking for RNA like the PCR test, these antigen tests look for pieces of the protein. It would be like if I wanted a test to tell me that somebody was an individual, it would actually look for features like their eyes or their nose. And in this case, it is looking for different parts of the virus. In general, the spike protein or the nucleocapsid, these are two parts of the virus.

The reason that these antigen tests are going to be a little bit less sensitive to detect the virus molecules is because there’s no step that we call an amplification step. One of the things that makes the PCR test that looks for the virus RNA so powerful is that it can take just one molecule, which the sensor on the machine might not be able to detect readily, but then it amplifies that molecule millions and millions of times so that the sensor can see it. These antigen tests, because they’re so simple and so easy to use and just happen on a piece of paper, they don’t have that amplification step right now. And so they require a larger amount of virus in order to be able to detect it. And that’s why I like to think of these types of tests having their primary advantage to detect people with enough virus that they might be transmitting or transmissible to other people.”

The PCR test provides a simple yes/no answer to the question of whether a patient is infected.
Source: Covid Confusion On PCR Testing: Maybe Most Of Those Positives Are Negatives.

Similar PCR tests for other viruses nearly always offer some measure of the amount of virus. But yes/no isn’t good enough, Mina added; it’s the amount of virus that should dictate the infected patient’s next steps. “It’s really irresponsible, I think, to [ignore this],” Dr. Mina said of how contagious an infected patient may be.

“We’ve been using one type of data for everything,” Mina said, “for [diagnosing patients], for public health, and for policy decision-making.”

The PCR test amplifies genetic matter from the virus in cycles; the fewer cycles required, the greater the amount of virus, or viral load, in the sample. The greater the viral load, the more likely the patient is to be contagious.

This number of amplification cycles needed to find the virus, called the cycle threshold, is never included in the results sent to doctors and coronavirus patients, although if it was, it could give them an idea of how infectious the patients are.

One solution would be to adjust the cycle threshold used now to decide that a patient is infected. Most tests set the limit at 40, a few at 37. This means that you are positive for the coronavirus if the test process required up to 40 cycles, or 37, to detect the virus.

Any test with a cycle threshold above 35 is too sensitive, Juliet Morrison, a virologist at the University of California, Riverside, told the New York Times. “I’m shocked that people would think that 40 could represent a positive,” she said.

A more reasonable cutoff would be 30 to 35, she added. Dr. Mina said he would set the figure at 30, or even less.

Another solution, researchers agree, is to make even more widespread use of Rapid Diagnostic Tests (RDTs), which are much less sensitive and thus more likely to identify only patients with high levels of virus who pose a transmission risk.

Comment:  In other words, when they analyzed the tests that also reported cycle threshold (CT), they found that 85 to 90 percent were above 30. According to Dr. Mina a CT of 37 is 100 times too sensitive (7 cycles too many, 2^7 = 128) and a CT of 40 is 1,000 times too sensitive (10 cycles too many, 2^10 = 1024). Based on their sample of tests that also reported CT, as few as 10 percent of people with positive PCR tests actually have an active COVID-19 infection, which is a lot less than reported.
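The cycle-threshold arithmetic in the comment above can be sketched directly: each extra PCR cycle roughly doubles the genetic material, so a test run to CT 40 is 2^(40-30) times more sensitive than one cut off at CT 30. The 600-positives figure below is illustrative, taken from the Canadian daily counts discussed next:

```python
# Fold over-sensitivity of a PCR run to `ct_used` cycles, relative to a
# cutoff of `ct_reasonable` (Dr. Mina's suggested 30). Each cycle doubles.
def fold_oversensitive(ct_used: int, ct_reasonable: int = 30) -> int:
    return 2 ** (ct_used - ct_reasonable)

print(fold_oversensitive(37))  # 128, i.e. roughly 100x too sensitive
print(fold_oversensitive(40))  # 1024, i.e. roughly 1000x too sensitive

# If ~90% of positives had CT above 30, only ~10% of reported positives
# reflect an active, likely infectious viral load.
reported_positives = 600           # illustrative daily count
active_estimate = reported_positives * 0.10
print(active_estimate)             # 60.0
```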

Here is a graph showing how this applies to Canada.

It is evident that increased testing has resulted in more positives, while the positivity rate is unchanged. Doubling the tests has doubled the positives, up from 300 a day to nearly 600 a day presently.  Note these are PCR results, and the discussion above suggests that the number of persons with an active infectious viral load is likely 10% of those reported positive: IOW up from 30 a day to 60 a day.  And in the graph below, the total of actual cases in Canada is likely on the order of 13,000 over the last 7 months, an average of 62 cases a day.

WuFlu Exposes a Fundamental Flaw in US Health System

Dr. Mina goes on to explain what went wrong in US response to WuFlu:

In the U.S., we have a major focus on clinical medicine, and we have undervalued and underfunded the whole concept of public health for a very long time. We saw an example of this when we tried to get the state laboratories across the country to be able to perform the PCR tests back in February and March: we very quickly realized that our public health infrastructure in this country just wasn’t up to the task. We had very few labs that were really able to do enough testing to just meet the clinical demands. And so such a reduced focus on public health for so long has led to an ecosystem where our regulatory agencies, this being primarily the FDA, have a mandate to approve clinical medical diagnostic tests. But there’s actually no regulatory pathway that is available or exists — and in many ways, we don’t even have a language for it — for a test whose primary purpose is one of public health and not personal medical health.

That’s really caused a problem. And a lot of times, it’s interesting if you think about the United States, every single test that we get, with the exception maybe of a pregnancy test, has to go through a physician. And so that’s a symptom of a country that has focused, and a society really, that has focused so heavily on the medical industrial complex. And I’m part of that as a physician. But I also am part of the public health complex as an epidemiologist. And I see that sometimes these are at odds with each other, medicine and public health. And this is an example where because all of our regulatory infrastructure is so focused on medical devices… If you’re a public health person, you can actually have a huge amount of leeway in how your tests are working and still be able to get epidemics under control. And so there’s a real tension here between the regulations that would be required for these types of tests versus a medical diagnostic test.

Footnote:  I don’t think the Chinese leaders were focusing on the systemic weakness Dr. Mina mentions.  But you do have to bow to the inscrutable cleverness of the Chinese Communists releasing WuFlu as a means to sow internal turmoil within democratic capitalist societies.  On one side are profit-seeking Big Pharma, aided and abetted by Big Media using fear to attract audiences for advertising revenues.  The panicked public demands protection, which clueless government provides by shutting down the service and manufacturing industries, as well as throwing money around and taking on enormous debt.  The world just became China’s oyster.

Background from Previous Post: Covid Burnout in Canada August 28

The map shows that in Canada 9108 deaths have been attributed to Covid19, meaning people who died having tested positive for SARS CV2 virus.  This number accumulated over a period of 210 days starting January 31. The daily death rate reached a peak of 177 on May 6, 2020, and is down to 6 as of yesterday.  More details on this below, but first the summary picture. (Note: 2019 is the latest demographic report)

Canada        Pop          Ann Deaths   Daily Deaths   Risk per Person
2019          37,589,262   330,786      906            0.8800%
Covid 2020    37,589,262   9,108        43             0.0242%

Over the epidemic months, the average Covid daily death rate amounted to 5% of the All Causes death rate. During this time a Canadian had an average risk of about 1 in 4000 of dying with SARS CV2 versus a 1 in 114 chance of dying regardless of that infection. As shown later below, the risk varied greatly with age, being much lower for younger, healthier people.
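The risk figures in the table above follow directly from the quoted totals; a quick sketch of the arithmetic:

```python
# Risk arithmetic behind the table above, using the quoted 2019
# demographic figures and the Covid totals accumulated over 210 days.
population = 37_589_262
annual_deaths_all_causes = 330_786   # Canada, 2019
covid_deaths = 9_108                 # attributed over 210 days

daily_all_causes = annual_deaths_all_causes / 365   # ~906 per day
daily_covid = covid_deaths / 210                    # ~43 per day

print(f"Covid daily rate is {daily_covid / daily_all_causes:.0%} of all causes")
print(f"All-causes risk per person: {annual_deaths_all_causes / population:.4%}")
print(f"Covid risk per person: {covid_deaths / population:.4%}")
```

This reproduces the 906 and 43 daily rates, the ~5% ratio, and the 0.8800% and 0.0242% per-person risks in the table.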

Background Updated from Previous Post

In reporting on Covid19 pandemic, governments have provided information intended to frighten the public into compliance with orders constraining freedom of movement and activity. For example, the above map of the Canadian experience is all cumulative, and the curve will continue upward as long as cases can be found and deaths attributed.  As shown below, we can work around this myopia by calculating the daily differentials, and then averaging newly reported cases and deaths by seven days to smooth out lumps in the data processing by institutions.

A second major deficiency is lack of reporting of recoveries, including people infected and not requiring hospitalization or, in many cases, without professional diagnosis or treatment. The only recoveries presently to be found are limited statistics on patients released from hospital. The only way to get at the scale of recoveries is to subtract deaths from cases, considering survivors to be in recovery or cured. Comparing such numbers involves the delay between infection, symptoms and death. Herein lies another issue of terminology: a positive test for the SARS CV2 virus is reported as a case of the disease COVID19. In fact, an unknown number of people have been infected without symptoms, and many with very mild discomfort.

August 7 in the UK it was reported (here) that around 10% of coronavirus deaths recorded in England – almost 4,200 – could be wiped from official records due to an error in counting.  Last month, Health Secretary Matt Hancock ordered a review into the way the daily death count was calculated in England citing a possible ‘statistical flaw’.  Academics found that Public Health England’s statistics included everyone who had died after testing positive – even if the death occurred naturally or in a freak accident, and after the person had recovered from the virus.  Numbers will now be reconfigured, counting deaths if a person died within 28 days of testing positive much like Scotland and Northern Ireland…

Professor Heneghan, director of the Centre for Evidence-Based Medicine at Oxford University, who first noticed the error, told the Sun:

‘It is a sensible decision. There is no point attributing deaths to Covid-19 28 days after infection…

For this discussion let’s assume that anyone reported as dying from COVD19 tested positive for the virus at some point prior. From the reasoning above let us assume that 28 days after testing positive for the virus, survivors can be considered recoveries.

Recoveries are calculated as cases minus deaths with a lag of 28 days. Daily cases and deaths are averages of the seven days ending on the stated date. Recoveries are # of cases from 28 days earlier minus # of daily deaths on the stated date. Since both testing and reports of Covid deaths were sketchy in the beginning, this graph begins with daily deaths as of April 24, 2020 compared to cases reported on March 27, 2020.

The line shows the Positivity metric for Canada starting at nearly 8% for new cases April 24, 2020. That is, for the 7 day period ending April 24, there were a daily average of 21,772 tests and 1715 new cases reported. Since then the rate of new cases has dropped down, now holding steady at ~1% since mid-June. Yesterday, the daily average number of tests was 45,897 with 427 new cases. So despite more than doubling the testing, the positivity rate is not climbing.  Another view of the data is shown below.

The scale of testing has increased and now averages over 45,000 a day, while positive tests (cases) are hovering at 1% positivity.  The shape of the recovery curve resembles the case curve lagged by 28 days, since death rates are a small portion of cases.  The recovery rate has grown from 83% to 99%, holding steady over the last 2 weeks, so that recoveries exceed new positives. This approximation surely understates the number of those infected with SARS CV2 who are healthy afterwards, since antibody studies show infection rates multiples higher than confirmed positive tests (8 times higher in Canada).  In absolute terms, cases are now down to 427 a day and deaths 6 a day, while estimates of recoveries are 437 a day.
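The recovery estimate described above (7-day-averaged cases from 28 days earlier, minus the current 7-day-averaged deaths) can be sketched as below. The case and death series here are flat, made-up placeholders roughly matching the recent Canadian levels, not the actual data:

```python
# Recovery estimate: lagged smoothed cases minus current smoothed deaths.
def seven_day_avg(series, day):
    """Average of the up-to-7 values ending on `day` (0-indexed)."""
    window = series[max(0, day - 6): day + 1]
    return sum(window) / len(window)

def recoveries(cases, deaths, day, lag=28):
    """Estimated recoveries on `day`: cases from `lag` days earlier,
    7-day averaged, minus the 7-day-averaged deaths on `day`."""
    return seven_day_avg(cases, day - lag) - seven_day_avg(deaths, day)

# 60 days of illustrative data: ~430 cases/day, ~6 deaths/day.
cases = [430] * 60
deaths = [6] * 60
print(recoveries(cases, deaths, day=40))  # 424.0
```

Since deaths are a tiny fraction of cases, the recovery curve is essentially the case curve shifted 28 days, which is the shape noted above.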

The key numbers: 

99% of those tested are not infected with SARS-CoV-2.

99% of those who are infected recover without dying.

Summary of Canada Covid Epidemic

It took a lot of work, but I was able to produce something akin to the Dutch advice to their citizens.

The media and governmental reports focus on total accumulated numbers, which are big enough to scare people into doing as they are told. In the absence of contextual comparisons, citizens have difficulty answering the main (perhaps only) question on their minds: What are my chances of catching Covid19 and dying from it?

A previous post reported that the Netherlands parliament was provided with the type of guidance everyone wants to see.

For Canadians, the most similar analysis is this one from the Daily Epidemiology Update:

The table presents only those cases with full clinical documentation, comprising some 2,194 deaths compared to the 5,842 total reported. The numbers show that under 60 years old, few adults and almost no children have anything to fear.

Update May 20, 2020

It is quite difficult to find cases and deaths broken down by age group. For Canadian national statistics, I resorted to a report from Ontario for the age distributions, since that province provides 69% of the cases outside of Quebec and 87% of the deaths. Applying those proportions across Canada as a whole nation results in this table:

Age      Risk of Test +   Risk of Death   Population per 1 CV death
<20      0.05%            None            NA
20-39    0.20%            0.000%          431,817
40-59    0.25%            0.002%          42,273
60-79    0.20%            0.020%          4,984
80+      0.76%            0.251%          398
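The "Population per 1 CV death" column is simply the reciprocal of the death risk. A quick check (rounding explains the small discrepancies in the mid-range rows):

```python
# Convert a percentage risk of death into "1 death per N people".
def population_per_death(risk_pct):
    return round(1 / (risk_pct / 100))

print(population_per_death(0.251))  # 398  (matches the 80+ row)
print(population_per_death(0.020))  # 5000 (vs. 4,984 from unrounded data)
```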

In the worst case, if you are a Canadian aged over 80 years, you have a 1 in 400 chance of dying from Covid19. If you are 60 to 80 years old, your odds are 1 in 5,000. Younger than that, your chance is only slightly higher than that of winning (or in this case, losing) the lottery.

As noted above, Quebec provides the bulk of cases and deaths in Canada, and also reports age distributions more precisely. The numbers in the table below show the risks for Quebecers.

Age        Risk of Test +   Risk of Death   Population per 1 CV death
0-9 yrs    0.13%            0               NA
10-19 yrs  0.21%            0               NA
20-29 yrs  0.50%            0.000%          289,647
30-39 yrs  0.51%            0.001%          152,009
40-49 yrs  0.63%            0.001%          73,342
50-59 yrs  0.53%            0.005%          21,087
60-69 yrs  0.37%            0.021%          4,778
70-79 yrs  0.52%            0.094%          1,069
80-89 yrs  1.78%            0.469%          213
90+ yrs    5.19%            1.608%          62

While some of the risk factors are higher in the viral hotspot of Quebec, it is still the case that under 80 years of age, your chance of dying from Covid19 is less than 1 in 1,000, and much lower the younger you are.

Positive HCQ Treatment Outcomes in 88 International Studies

The report is Early treatment with hydroxychloroquine: a country-based analysis at C19study.com.  Excerpts in italics with my bolds.

Many countries either adopted or declined early treatment with HCQ, effectively forming a large trial with 1.8 billion people in the treatment group and 663 million in the control group. As of September 6, 2020, an average of 57.4 per million in the treatment group have died, and 466.4 per million in the control group, relative risk 0.123. After adjustments, treatment and control deaths become 119.6 per million and 694.7 per million, relative risk 0.17. The probability of an equal or lower relative risk occurring from random group assignments is 0.008. Accounting for predicted changes in spread, we estimate a relative risk of 0.24. The treatment group has a 76.2% lower death rate. Confounding factors affect this estimate. We examined diabetes, obesity, hypertension, life expectancy, population density, urbanization, testing level, and intervention level, which do not account for the effect observed.
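The relative-risk arithmetic in the excerpt can be verified directly from the quoted deaths-per-million figures:

```python
# Relative risk = deaths per million (treatment) / deaths per million (control),
# using the figures quoted from the study.
def relative_risk(treated_dpm, control_dpm):
    return treated_dpm / control_dpm

print(round(relative_risk(57.4, 466.4), 3))   # 0.123 (unadjusted)
print(round(relative_risk(119.6, 694.7), 3))  # 0.172 (after adjustments)

# The projected relative risk of 0.238 implies a 1 - 0.238 = 76.2% lower death rate.
print(f"{1 - 0.238:.1%}")  # 76.2%
```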

The treatment group countries generally show significantly slower growth in mortality which may be due to treatment, interventions, differences in culture, or the initial degree of infections arriving into the country. Over time we expect that increasingly similar percentages of people will have been exposed, since it is unlikely that the virus will be eliminated soon.

To account for future spread, we created an estimate of the future adjusted deaths per million for each country, 90 days in the future, based on a second degree polynomial fit according to the most recent 30 days, enforcing the requirement that deaths do not decrease, and using an assumption of a progressively decreasing maximum increase over time. Figure 5 shows the results, which predicts a future relative risk of 0.24, i.e., the treatment group has 76.2% lower chance of death.
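The projection described above can be sketched as follows. This is an illustrative approximation only: the polynomial fit and the non-decreasing constraint follow the excerpt, but the study's "progressively decreasing maximum increase" damping is not reproduced, and the input series here is hypothetical:

```python
import numpy as np

# Fit a 2nd-degree polynomial to the last 30 days of cumulative deaths per
# million, project 90 days ahead, and clip so the projection never decreases.
def project_deaths(last_30_days, horizon=90):
    x = np.arange(30)
    coeffs = np.polyfit(x, last_30_days, deg=2)
    future = np.polyval(coeffs, np.arange(30, 30 + horizon))
    # enforce that cumulative deaths do not decrease
    return np.maximum.accumulate(np.maximum(future, last_30_days[-1]))

# Hypothetical series: growth slowing from 100 toward ~130 deaths per million.
recent = 100 + 30 * (1 - np.exp(-np.arange(30) / 15))
proj = project_deaths(recent)
assert all(b >= a for a, b in zip(proj, proj[1:]))  # monotone non-decreasing
```

The clipping step matters because an unconstrained quadratic fit to a flattening curve eventually turns downward, which would imply cumulative deaths decreasing.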

Treatment groups.

Entire countries made different decisions regarding treatment with HCQ based on the same information, thereby assigning their residents to the treatment or control group in advance. Since assignment is done without regard to individual information such as medical status, assignment of individuals is random for the purposes of this study.

We focus here on countries that chose and maintained a clear assignment to one of the groups for a majority of the duration of their outbreak, either adopting widespread use, or highly limiting use. Some countries have very mixed usage, and some countries have joined or left the treatment group during their outbreak. We searched government web sites, Twitter, and Google, with the assistance of several experts in HCQ usage, to confirm assignment to the treatment or control group, locating a total of 225 relevant references, shown in Appendix 12. We excluded countries with <1M population, and countries with <0.5% of people over the age of 80. COVID-19 disproportionately affects older people and the age based adjustments are less reliable when there are very few people in the high-risk age groups. We also excluded countries that quickly adopted aggressive intervention and isolation strategies and consequently have very little spread of the virus to date. This exclusion, based on analysis by [Leffler], favors the control group and is discussed in detail below. We also present results without these exclusions for comparison.

Collectively the countries we identified with stable and relatively clear assignments account for 31.1% of the world population (2.4B of 7.8B). Details of the groups and evidence, including countries identified as having mixed use of HCQ, can be found in Appendix 12.

Case statistics.

We analyze deaths rather than cases because case numbers are highly dependent on the degree of testing effort, criteria for testing, the accuracy and availability of tests, accuracy of reporting, and because there is very high variability in case severity, including a high percentage of asymptomatic cases.

Co-administered treatments.

Several theories exist for why HCQ is effective [Andreani, Brufsky, Clementi, de Wilde, Derendorf, Devaux, Grassin-Delyle, Hoffmann, Hu, Keyaerts, Kono, Liu, Pagliano, Savarino, Savarino (B), Scherrmann, Sheaff, Vincent, Wang, Wang (B)], some of which involve co-administration of other medication or supplements. Most commonly used are zinc [Derwand, Shittu] and Azithromycin (AZ) [Guérin]. In vitro experiments report a synergistic effect of HCQ and AZ on antiviral activity [Andreani] at concentrations obtained in the human lung, and in vivo results are consistent with this [Gautret]. Zinc reduces SARS-CoV RNA-dependent RNA polymerase activity in vitro [te Velthuis], however it is difficult to obtain significant intracellular concentrations with zinc alone [Maret]. Combining it with a zinc ionophore such as HCQ increases cellular uptake, making it more likely to achieve effective intracellular concentrations [Xue]. Zinc deficiency varies and inclusion of zinc may be more or less important based on an individual’s existing zinc level. Zinc consumption varies widely based on diet [NIH]. To the extent that the co-administration of zinc, Azithromycin, or other medication or supplements is important, we may underestimate the effectiveness of HCQ because not all countries and locations are using the optimal combination.

Demise of Journalism Is Confirmed

Glenn Greenwald explains in his Intercept article Journalism’s New Propaganda Tool: Using “Confirmed” to Mean its Opposite.  Excerpts in italics with my bolds.

Outlets claiming to have “confirmed” Jeffrey Goldberg’s story about Trump’s troops comments are again abusing that vital term.

The same misleading tactic is now driving the supremely dumb but all-consuming news cycle centered on whether President Trump, as first reported by the Atlantic’s editor-in-chief Jeffrey Goldberg, made disparaging comments about The Troops. Goldberg claims that “four people with firsthand knowledge of the discussion that day” — whom the magazine refuses to name because they fear “angry tweets” — told him that Trump made these comments. Trump, as well as former aides who were present that day (including Sarah Huckabee Sanders and John Bolton), deny that the report is accurate.

So we have anonymous sources making claims on one side, and Trump and former aides (including Bolton, now a harsh Trump critic) insisting that the story is inaccurate. Beyond deciding whether or not to believe Goldberg’s story based on what best advances one’s political interests, how can one resolve the factual dispute? If other media outlets could confirm the original claims from Goldberg, that would obviously be a significant advancement of the story.  Other media outlets — including Associated Press and Fox News — now claim that they did exactly that: “confirmed” the Atlantic story.

But if one looks at what they actually did, at what this “confirmation” consists of, it is the opposite of what that word would mean, or should mean, in any minimally responsible sense.

AP, for instance, merely claims that “a senior Defense Department official with firsthand knowledge of events and a senior U.S. Marine Corps officer who was told about Trump’s comments confirmed some of the remarks to The Associated Press,” while Fox merely said “a former senior Trump administration official who was in France traveling with the president in November 2018 did confirm other details surrounding that trip.”

In other words, all that likely happened is that the same sources who claimed to Jeffrey Goldberg, with no evidence, that Trump said this went to other outlets and repeated the same claims — the same tactic that enabled MSNBC and CBS to claim they had “confirmed” the fundamentally false CNN story about Trump Jr. receiving advanced access to the WikiLeaks archive. Or perhaps it was different sources aligned with those original sources and sharing their agenda who repeated these claims. Given that none of the sources making these claims have the courage to identify themselves, due to their fear of mean tweets, it is impossible to know.

But whatever happened, neither AP nor Fox obtained anything resembling “confirmation.”

They just heard the same assertions that Goldberg heard, likely from the same circles if not the same people, and are now abusing the term “confirmation” to mean “unproven assertions” or “unverifiable claims” (indeed, Fox now says that “two sources who were on the trip in question with Trump refuted the main thesis of The Atlantic’s reporting”).

It should go without saying that none of this means that Trump did not utter these remarks or ones similar to them. He has made public statements in the past that are at least in the same universe as the ones reported by the Atlantic, and it is quite believable that he would have said something like this (though the absolute last person who should be trusted with anything, particularly interpreting claims from anonymous sources, is Jeffrey Goldberg, who has risen to one of the most important perches in journalism despite (or, more accurately because of) one of the most disgraceful and damaging records of spreading disinformation in service of the Pentagon and intelligence community’s agenda).

But journalism is not supposed to be grounded in whether something is “believable” or “seems like it could be true.” Its core purpose, the only thing that really makes it matter or have worth, is reporting what is true, or at least what evidence reveals. And that function is completely subverted when news outlets claim that they “confirmed” a previous report when they did nothing more than just talked to the same people who anonymously whispered the same things to them as were whispered to the original outlet.

Quite aside from this specific story about whether Trump loves The Troops, conflating the crucial journalistic concept of “confirmation” with “hearing the same idle gossip” or “unproven assertions” is a huge disservice. It is an instrument of propaganda, not reporting. And its use has repeatedly deceived rather than informed the public. Anyone who doubts that should review how it is that MSNBC and CBS both claimed to have “confirmed” a CNN report which turned out to be ludicrously and laughably false. Clearly, the term “confirmation” has lost its meaning in journalism.