Why CO2 Can’t Warm the Planet

Figure 1. The global annual mean energy budget of Earth’s climate system (Trenberth and Fasullo, 2012).

Recently, in a discussion thread, a warming proponent suggested we read this paper as conclusive evidence: The greenhouse effect and carbon dioxide by Wenyi Zhong and Joanna D. Haigh (2013), Imperial College London. Indeed, as advertised, the paper staunchly presents IPCC climate science. Excerpts in italics with my bolds.

IPCC Conception: Earth’s radiation budget and the Greenhouse Effect

The Earth is bathed in radiation from the Sun, which warms the planet and provides all the energy driving the climate system. Some of the solar (shortwave) radiation is reflected back to space by clouds and bright surfaces but much reaches the ground, which warms and emits heat radiation. This infrared (longwave) radiation, however, does not directly escape to space but is largely absorbed by gases and clouds in the atmosphere, which itself warms and emits heat radiation, both out to space and back to the surface. This enhances the solar warming of the Earth producing what has become known as the ‘greenhouse effect’. Global radiative equilibrium is established by the adjustment of atmospheric temperatures such that the flux of heat radiation leaving the planet equals the absorbed solar flux.

The schematic in Figure 1, which is based on available observational data, illustrates the magnitude of these radiation streams. At the Earth’s distance from the Sun the flux of radiant energy is about 1365Wm−2 which, averaged over the globe, amounts to 1365/4 = 341W for each square metre. Of this about 30% is reflected back to space (by bright surfaces such as ice, desert and cloud) leaving 0.7 × 341 = 239Wm−2 available to the climate system. The atmosphere is fairly transparent to short wavelength solar radiation and only 78Wm−2 is absorbed by it, leaving about 161Wm−2 being transmitted to, and absorbed by, the surface. Because of the greenhouse gases and clouds the surface is also warmed by 333Wm−2 of back radiation from the atmosphere. Thus the heat radiation emitted by the surface, about 396Wm−2, is 157Wm−2 greater than the 239Wm−2 leaving the top of the atmosphere (equal to the solar radiation absorbed) – this is a measure of ‘greenhouse trapping’.
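The arithmetic in the excerpt can be checked with a short sketch (all inputs are the rounded flux values quoted above, not new data):

```python
# Back-of-envelope check of the energy-budget figures quoted in the excerpt.
TSI = 1365.0                  # solar flux at Earth's distance, W/m^2
global_mean = TSI / 4         # averaged over the sphere: ~341 W/m^2
ALBEDO = 0.30                 # ~30% reflected back to space

absorbed = (1 - ALBEDO) * global_mean    # ~239 W/m^2 enters the climate system
surface_emission = 396.0                 # longwave emitted by the surface, W/m^2
trapping = surface_emission - absorbed   # ~157 W/m^2 of 'greenhouse trapping'

print(round(global_mean), round(absorbed), round(trapping))  # 341 239 157
```

The factor of 4 is the ratio of the sphere's surface area to the disc intercepting sunlight; the rest is the subtraction described in the excerpt.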

Why This Line of Thinking is Wrong and Misleading

Short Answer: Greenhouse Gases Cannot Physically Cause Observed Global Warming. Dr. Peter Langdon Ward explains more fully in the linked text. Excerpts in italics with my bolds.

Key Points:
Greenhouse-warming theory and the diagram above are based on these mistaken assumptions:

(1) that radiative energy can be quantified by a single number of watts per square meter,
(2) that these radiative forcings can be added together, and
(3) that Earth’s surface temperature is proportional to the sum of all these radiative forcings.

There are other serious problems:

(4) greenhouse gases absorb only a small part of the radiation emitted by Earth,
(5) they can only reradiate what they absorb,
(6) they do not reradiate in every direction as assumed,
(7) they make up only a tiny part of the gases in the atmosphere, and
(8) they have been shown by experiment not to cause significant warming.
(9) The thermal effects of radiation are not about the amount of radiation absorbed, as currently assumed; they are about the temperature of the emitting body and the difference in temperature between the emitting and absorbing bodies, as described below.

Back to the Basics of Radiative Warming in Earth’s Atmosphere

What Physically Is Thermal Radiation?

We physically measure visible light as containing all frequencies of oscillation ranging from 450 to 789 terahertz, where one terahertz is one trillion cycles per second (10^12 cycles per second). We also observe that the visible spectrum is but a very small part of a much wider continuum that we call electromagnetic radiation. This continuum extends over more than 20 orders of magnitude in frequency, from extremely low-frequency radio signals measured in cycles per second, through microwave, infrared, visible, ultraviolet, and X-rays, to gamma rays with frequencies of more than 10^20 cycles per second.
Thermal radiation is a portion of this continuum of electromagnetic radiation radiated by a body of matter as a result of the body’s temperature—the hotter the body, shown here at the bottom as Temperature, the higher the radiated frequencies of oscillation with significant amplitudes of oscillation.

We observe that electromagnetic radiation has two physical properties: 1) frequency of oscillation, which is color in the visible part of the continuum, and 2) amplitude of oscillation, which we perceive as intensity or brightness at each frequency.

Planck’s Law

In 1900, Max Planck, one of the fathers of modern physics, derived by trial and error an equation that has become known as Planck’s empirical law. It is not based on theory, although several derivations have since been proposed; it was formulated solely to calculate correctly the intensities at each frequency observed during extensive direct observations of Nature. Planck’s empirical law gives the observed intensity, or amplitude of oscillation, at each frequency of oscillation for radiation emitted by a black body of matter at a specific temperature and at thermal equilibrium. A black body is simply a perfect absorber and emitter of all frequencies of radiation.

Thermal radiation from Earth, at a temperature of 15C, consists of the narrow continuum of frequencies of oscillation shown in green in this plot of Planck’s empirical law. Thermal radiation from the tungsten filament of an incandescent light bulb at 3000C consists of a broader continuum of frequencies shown in yellow and green. Thermal radiation from the Sun at 5500C consists of a much broader continuum of frequencies shown in red, yellow and green.

Note in this plot of Planck’s empirical law that the higher the temperature, 1) the broader the continuum of frequencies, 2) the higher the amplitude of oscillation at each and every frequency, and 3) the higher the frequencies of oscillation that are oscillating with the largest amplitudes of oscillation.

Radiation from the Sun, shown in red, yellow, and green, clearly contains much higher frequencies and amplitudes of oscillation than radiation from Earth, shown in green. Planck’s empirical law shows unequivocally that the physical properties of radiation are a function of the temperature of the body emitting the radiation.
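The three qualitative claims in this passage (a hotter body has a broader spectrum, a larger amplitude at every frequency, and a peak at higher frequency) can be checked numerically from Planck's law. A minimal sketch, using standard CODATA constants and kelvin equivalents of the temperatures named above:

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
K = 1.380649e-23     # Boltzmann constant, J/K

def planck(nu, T):
    """Spectral radiance B_nu(T), W*m^-2*sr^-1*Hz^-1, at frequency nu (Hz)."""
    return (2 * H * nu**3 / C**2) / math.expm1(H * nu / (K * T))

def wien_peak(T):
    """Frequency of maximum emission (Wien displacement, ~5.879e10 Hz per K)."""
    return 5.878925757e10 * T

earth, bulb, sun = 288.0, 3273.0, 5773.0   # ~15C, ~3000C, ~5500C in kelvin

# A hotter body radiates more strongly at every frequency...
for nu in (1e13, 1e14, 1e15):
    assert planck(nu, sun) > planck(nu, bulb) > planck(nu, earth)

# ...and its emission peaks at a higher frequency.
print(f"peaks: {wien_peak(earth):.2e}, {wien_peak(bulb):.2e}, {wien_peak(sun):.2e} Hz")
```

Earth's curve peaks near 1.7e13 Hz (thermal infrared), while the Sun's peaks near 3.4e14 Hz, in the visible, consistent with the green and red regions described in the text.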

Heat, defined in concept as that which must be absorbed by solid matter to increase its temperature, is similarly a broad continuum of frequencies of oscillation and corresponding amplitudes of oscillation.

For example, the broad continuum of heat that Earth, with a temperature of 15C, must absorb to reach a temperature of 3000C is shown by the continuum of values within the yellow-shaded area in this plot of Planck’s empirical law.

Heat is, therefore, a broad continuum of frequencies and amplitudes of oscillation that cannot be described by a single number of watts per square meter as currently assumed in physics and in greenhouse-warming theory. The physical properties of heat as described by Planck’s empirical law and the thermal effects of this heat are determined both by the temperature of the emitting body and, as we will see below, by the difference in temperature between the emitting body and the absorbing body.

Greenhouse Gases Limited to Low Energy Frequencies

Figure 1.10 When ozone is depleted, a narrow sliver of solar ultraviolet-B radiation with wavelengths close to 0.31 µm (yellow triangle) reaches Earth. The red circle shows that the energy of this ultraviolet radiation is around 4 electron volts (eV) on the red scale on the right, 48 times the energy absorbed most strongly by carbon dioxide (blue circle, 0.083 eV at 14.9 micrometers (µm) wavelength). Shaded grey areas show the bandwidths of absorption by different greenhouse gases. Current computer models calculate radiative forcing by adding up the areas under the broadened spectral lines that make up these bandwidths. Net radiative energy, however, is proportional to frequency only (red line), not to amplitude, bandwidth, or amount.

Greenhouse gases absorb only certain limited bands of frequencies of radiation emitted by Earth as shown in this diagram. Water is, by far, the strongest absorber, especially at lower frequencies.

Climate models neglect the fact, shown by the red line in Figure 1.10 and explained in Chapter 4, that due to its higher frequency, ultraviolet radiation (red circle) is 48 times more energy-rich, 48 times “hotter,” than infrared absorbed by carbon dioxide (blue circle), which means that there is a great deal more energy packed into that narrow sliver of ultraviolet (yellow triangle) than there is in the broad band of infrared. This actually makes very good intuitive sense. From personal experience, we all know that we get very hot and are easily sunburned when standing in ultraviolet sunlight during the day, but that we have trouble keeping warm at night when standing in infrared energy rising from Earth.
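The “48 times” figure is just the ratio of single-photon energies, E = hc/λ, at the two wavelengths named in the figure caption:

```python
H = 6.62607015e-34     # Planck constant, J*s
C = 2.99792458e8       # speed of light, m/s
EV = 1.602176634e-19   # joules per electron volt

def photon_energy_ev(wavelength_m):
    """Energy of a single photon at the given wavelength, in eV."""
    return H * C / wavelength_m / EV

uv = photon_energy_ev(0.31e-6)    # UV-B near 0.31 um: ~4.0 eV
ir = photon_energy_ev(14.9e-6)    # CO2 absorption band at 14.9 um: ~0.083 eV
print(f"{uv:.2f} eV, {ir:.3f} eV, ratio {uv/ir:.0f}")
```

Since E is inversely proportional to wavelength, the ratio is simply 14.9/0.31, which is about 48.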


Ångström (1900) showed that “no more than about 16 percent of earth’s radiation can be absorbed by atmospheric carbon dioxide, and secondly, that the total absorption is very little dependent on the changes in the atmospheric carbon dioxide content, as long as it is not smaller than 0.2 of the existing value.” Extensive modern data agree that carbon dioxide absorbs less than 16% of the frequencies emitted by Earth shown by the vertical black lines of this plot of Planck’s empirical law where frequencies are plotted on a logarithmic x-axis. These vertical black lines show frequencies and relative amplitudes only. Their absolute amplitudes on this plot are arbitrary.

Temperature at Earth’s surface is the result of the broad continuum of oscillations shown in green. Absorbing less than 16% of the frequencies emitted by Earth cannot have much effect on the temperature of anything.
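The ~16% figure can be sanity-checked by integrating Planck's law over the CO2 band. This is a rough sketch under assumptions I have chosen for illustration: Earth emitting as a 288 K black body and nominal band edges of 13–17 µm; it gives the fraction of emitted energy falling in the band, not a count of frequencies:

```python
import math

H, C, K = 6.62607015e-34, 2.99792458e8, 1.380649e-23
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W*m^-2*K^-4

def planck_lambda(lam, T):
    """Spectral radiance per unit wavelength, W*m^-3*sr^-1."""
    return (2 * H * C**2 / lam**5) / math.expm1(H * C / (lam * K * T))

def band_fraction(lam_lo, lam_hi, T, n=4000):
    """Fraction of total blackbody emission between two wavelengths (midpoint rule)."""
    dlam = (lam_hi - lam_lo) / n
    band = sum(planck_lambda(lam_lo + (i + 0.5) * dlam, T) for i in range(n)) * dlam
    return band / (SIGMA * T**4 / math.pi)   # divide by total blackbody radiance

# Assumed band edges of 13-17 um for the CO2 15-um band (a simplification).
frac = band_fraction(13e-6, 17e-6, 288.0)
print(f"~{frac:.0%} of 288 K blackbody emission falls in 13-17 um")
```

With these assumed band edges the fraction comes out a little under one fifth, in the same ballpark as the Ångström figure quoted above.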

Summary

Greenhouse warming theory depends on at least nine assumptions that appear to be mistaken. Greenhouse warming theory has never been shown to be physically possible by experiment, a cornerstone of the scientific method. Greenhouse warming theory is rapidly becoming the most expensive mistake ever made in the history of science, economically, politically, and environmentally.

Resources: Light Bulbs Disprove Global Warming

CO2, SO2, O3: A Journey of Discovery

Footnote: 

This post is about the radiative properties of CO2 severely limiting its potential to cause global warming.  A separate issue is the belief by warmists and some skeptics that humans are the primary cause of CO2 increases in the atmosphere.  I have looked at this and concluded that natural sources and sinks are more likely responsible, as explained in the post What Causes Rising Atmospheric CO2?

A subsequent post updated the analysis of changes in CO2 and temperatures: Data Update Shows Orwellian Climate Science

Covid Vaccine Obsession

Some time ago PM Trudeau floated the idea that pandemic shutdowns can’t be lifted until a vaccine is available.  More recently, the lack of a vaccine is touted in the US as a reason for keeping schools closed and travel restrictions in place.  What is this obsession with a vaccine as the savior whose healing powers we must await while hiding in isolation?  As a previous post reprinted below explains, it is again a case of generals fighting a past war rather than the current one.

But let’s also be attentive to a bait and switch involving shifty use of words.  A vaccine by definition works by training our immune system to recognize and resist a targeted pathogen.  And it’s a long road to perfecting an agent which achieves that without doing harm to some or many individuals.  Meanwhile, Bill Gates is promoting something termed a “vaccine” which intends to modify our DNA to defend against SARS CV2.  That is not like the smallpox or polio vaccines.  It is more like making humans genetically modified organisms (GMOs).

I have nothing against GMO plant inventions.  As Britain’s Princess Anne reminded us last month, the world benefited greatly from Canadian researchers who genetically modified rapeseed plants, resulting in the nutritionally superior Canola vegetable oil. The “Green Revolution”, involving golden rice, relies on responsible use of GMOs.  But we drew the line at cloning humans, and the same goes for tinkering with people’s genetic codes.

[Comment:  I am somewhat reassured by this statement from an article explaining DNA and RNA vaccines:

If there are ethical concerns in genetics, they might apply to techniques like human-gene editing, where a person’s DNA is altered to cut out a gene that might make you prone to a particular cancer. And those alterations can be passed on through generations.  But that’s not the case with DNA vaccines. “They don’t alter a person’s DNA at all. They provide a temporary addition in a small number of cells,” says Gilbert. “DNA vaccines do not enter the genome.”  Source: What’s the science on DNA and RNA vaccines?]

Let the race for real vaccines proceed, but we can’t count on a miracle finish any time soon, and it may even prove impossible for a coronavirus. Don’t forget SARS CV1 went away by itself before a vaccine could be deployed.  If it turns out Sweden took the right strategy, SARS CV2 may also take its place beside other pathogens with which we learn to live.  And in the meantime, many nations around the world are taking care of their citizens, saving lives with the HCQ+ treatment protocol.

Previous Post: The Virus Wars

The proverb is “Generals are always fighting the last war,” and its origin is uncertain. One possibility is a quote from Winston Churchill: “It is a joke in Britain to say that the War Office is always preparing for the last war.” (Winston S. Churchill, The Second World War, vol. I, 1948; Boston: Houghton Mifflin, 1985, p. 426)

Konrad Lorenz demonstrated how imprinting works upon animal behavior, while military historians have reported how powerfully human social animals are influenced by the past and instilled lessons from others.

Austria – 20th century. Animal behaviourist Konrad Lorenz and mallard goslings

Which brings me to these reflections about the current WuHanFlu outbreak. The chart at the top summarizes our received epidemiological wisdom about the danger of viruses according to the dimensions of deadliness and contagiousness. As the diagram shows, extremely deadly viruses tend to kill their hosts too quickly to be transmitted widely. Conversely, a virus that spreads easily accomplishes that by slowly killing its hosts, perhaps even leaving them alive. The biggest threats are the germs that are lethal, but spread widely because the symptoms are slow to develop (longer incubation period).

Regarding the recent virus wars, consider these four (Source: Big Think. Excerpts in italics with my bolds):

SARS (started in Hong Kong in March 2003),
Swine flu (started in Mexico in March 2009),
Ebola (started in Western Africa in March 2014), and
MERS (started in South Korea in May 2015).

The video below explains that the last two impactful wars were against SARS and Swine Flu (H1N1).

For the sake of comparison, the graphs for each epidemic are aligned so they all start together on Day One of each outbreak.

At first, Ebola is the scary one. Not only had it infected the most people after just one day, it had killed two thirds of those.

By comparison, SARS killed its first victim only after three days (out of 38 people infected).

By Day 10, SARS had overtaken Ebola as the most infectious of the outbreaks (264 vs. 145 patients), but the latter was ten times more lethal (91 dead from Ebola vs. 9 from SARS). At this time, the coronavirus had infected 39 people, killed none, and was still playing in the same minor league as the swine flu and MERS.

Day 20, and SARS cases are skyrocketing: 1,550 people are ill, 55 have died. That’s a death rate of 3.5%. Ebola has affected only 203 people by now, but killed 61.6% of them, a total of 125. Meanwhile, the coronavirus has taken Ebola’s second place, but is still far behind SARS (284 infected). At this time, the coronavirus has claimed the lives of just five people.

But now the coronavirus cases are exploding; by Day 30, the new virus has infected 7,816 people, killing 204. That’s far more infected than any other virus (SARS comes a distant second with 2,710 patients), and significantly more killed (Ebola, though still just 242 people ill, has killed 147, due to its high fatality rate). Meanwhile, MERS is stuck in triple digits, and the swine flu in double digits.

The swine flu numbers keep growing exponentially: by Day 80, they’ve passed 362,000 cases (and 1,770 deaths), far surpassing any of the other diseases.

Day 100: swine flu cases are approaching 1 million, deaths have surpassed 5,000. That’s far more than all the other diseases combined—they have merged into a single line at the bottom of the graph.

By Day 150, swine flu hit 5.2 million patients, with 25,400 people killed. By the time it was declared over, a year later, the outbreak would eventually have infected more than 60 million people and claimed the lives of almost 300,000.

Swine flu was caused by the H1N1 virus, which also caused the Spanish flu. That outbreak, in 1918/19, infected about 500 million people, or 1 in 3 people alive at that time. It killed at least 50 million people. It was the combination of extreme infectiousness and high fatality that made the Spanish flu such a global, lethal pandemic.

None of the other infectious diseases comes close to that combination. The swine flu, although more infectious than other diseases, was less infectious than the Spanish flu, and also less deadly (0.5%). Unlike COVID-19 or its fellow coronaviruses SARS and MERS, Ebola is not spread via airborne particles, but via contact with infected blood. That makes it hard to spread. Ironically, it may also be too lethal (39.6%) to spread very far. And COVID-19 itself, while relatively lethal (2.4%), is well below the deadliness of the Spanish flu, and does not seem to spread with the same ease.

As that history lesson shows, our pandemic generals have likely been preoccupied with three previous enemies: Spanish Flu, Swine Flu, and SARS. The first one served as the catastrophic defeat to be avoided, H1N1 as the victory achieved by deploying vaccine, and SARS as the coronavirus prototype. Naming the Wuhan virus SARS-CoV-2 (Severe acute respiratory syndrome coronavirus 2) predisposed tacticians and soldiers to fight against a viral pneumonia, and to expect airborne transmission as happened with SARS 1.

The battle plan was drawn up to protect the health care system against the deluge of victims coming to hospitals and ICUs. Flattening the curve of such cases was the strategy, and social distancing and personal immobility were imposed to that end. What has been the effect? For that there is an analysis from John Nolte: What Terrible Coronavirus Models Tell Us About Global Warming Models. H/T Joe D’Aleo. Excerpts in italics with my bolds.

Let’s face it, the coronavirus models are terrible. Not just off, but way, way, way off in their predictions of a doomsday scenario that never arrived.

That’s not to say that over 20,000 dead Americans is not a heartbreaking reality. That’s not even to say that parts of the country should not have been shut down. But come on…

We shut the entire country down using the Institute for Health Metrics and Evaluation (IHME) models, and in doing so put 17 million (and counting) Americans out of work, shattered 17 million (and counting) lives, and… Well, take a look for yourself below.

That gigantic hump is the IHME’s April 1 prediction of coronavirus hospitalizations. The smaller humps way, way, waaaay below that are the IHME’s predictions of coronavirus hospitalizations after they were revised just a few days later on April 5, 7, and 9.

The green line is the true number of hospitalizations, starting with the whole U.S., and into the states.

So why does this matter? And why are we looking at hospitalizations?

Well, remember, the whole reason for shutting down the economy was to ensure our healthcare system was not overloaded. And it should be noted that these expert models are based on full mitigation, based on what did indeed happen, which was basically a full shutdown of the economy by way of a lockdown. And these models are still horribly, terribly wrong.

Even if you believe the correct decision was made, that does not change how wildly wrong the coronavirus models were, are, and will almost certainly continue to be. That does not change the fact we shut down our entire economy based on incredibly flawed models.

Now I realize that the people who did the terribly flawed coronavirus models are not the same people who do the modeling for global cooling global warming climate change or whatever the hell these proven frauds are calling it today. But hear me out…

We’re still talking about “experts” our media and government grovel down to without question.

We’re still talking about models with the goal of destroying our way of life, our prosperity, our standard of living, and our individual freedoms to live our lives in whatever way we choose.

We’re still talking about models with the goal of handing a tremendously scary amount of authority and power to a centralized government.

The coronavirus modeling was based on something real, on something happening at the time. The experts doing the coronavirus models had all kinds of data on which to make their assumptions. Not just reams and reams of scientific data based on previous pandemics, viruses, and human behavior; but also real-time data on the coronavirus itself from China, Italy, and other countries… And they still blew it. They still got it horribly wrong.

A health worker in protective gear waits to hand out self-testing kits in a parking lot of Rose Bowl Stadium in Pasadena, Calif., during the coronavirus outbreak, April 8, 2020. (Mario Anzuoni/Reuters)

What Went Wrong? California Provides a Clue

As the diagram at the top shows, WuHanFlu looked like an especially dangerous mix of deadly contagion. Thus California, with its large population and extensive contact with China, should be the US viral hot spot, and yet it isn’t. Maybe the contagion is real but the effects are milder than imagined.

Victor Davis Hanson writes at National Review Yes, California Remains Mysterious — Despite the Weaponization of the Debate. Excerpts in italics with my bolds.

How Many People Already Have COVID-19?

California is touchy, and yet still remains confused, about incomplete data showing that the 40-million-person state, as of Sunday, April 12, reportedly had 23,777 cases of residents who have tested positive for the COVID-19 illness. The number of infected by the 12th includes 674 deaths, resulting in a fatality rate of about 17 deaths per million of population. That is among the lowest rates of the larger American states (Texas has 10 deaths per million), and lower than almost all major European countries (about half of Germany’s 36 deaths per million).
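The per-capita figure quoted for California follows directly from the numbers in the excerpt (population and death count as stated there, an April 12 snapshot rather than current data):

```python
def deaths_per_million(deaths, population):
    """Deaths per million residents."""
    return deaths / population * 1e6

# California figures as quoted in the excerpt (April 12 snapshot).
ca = deaths_per_million(674, 40_000_000)
print(round(ca))  # ~17 deaths per million, matching the excerpt
```

The same function applied to the quoted Texas and Germany rates lets the reader reproduce the cross-state and cross-country comparison.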

No doubt there are lots of questionable data in all such metrics. For a large state, California has not been especially impressive in per capita testing of its population (about 200,000 tests so far). Few, of course, believe that the denominator of cases based on test results represents the real number of those who have been or are infected.

There is now the old debate over exactly how the U.S. defines death by the virus versus death because of the contributing factors of the virus to existing medical issues. Certainly, the methodology of coronavirus modeling is quite different from that of, say, the flu. The denominator of flu cases is almost always a modeled approximation, not a misleadingly precise number taken from only those who go to their doctors or emergency rooms and test positive for an influenza strain. And the numerator of deaths from the flu may be calibrated somewhat more conservatively than those currently listed as deaths from the coronavirus.

Nonetheless, the state’s population is fairly certain. And for now, the number of deaths by the virus is the least controversial of many of these data, suggesting that deaths per million of population might be a useful comparative number.

As I wrote in a recent NRO piece, the state on the eve of the epidemic seemed especially vulnerable given the large influx of visitors from China on direct flights to its major airports all fall and early winter until the January 31 ban (and sometime after). It ranks rather low in state comparisons of hospital beds, physicians, and nurses per capita. It suffers high rates of poverty, wide prevalence of state assistance, and medical challenges such as widespread diabetes.

This IHME projection is current as of April 14, at 12 p.m. ET, and will be updated periodically as the modelers input new data. The visualization shows the day each state may reach its peak between now and Aug. 4. The projected peak is when a state’s curve begins to show a consistent trend downward.  Source: NPR

Certainly, both then and more recently, there have been a number of anecdotal accounts, media stories, and small isolated studies suggesting that more people than once thought, both here and abroad, have been infected with the virus and developed immunity, and that the virus may have reached the West and the U.S. earlier than once or currently admitted by Chinese researchers. So, inter alia, California in theory could weather the epidemic with much less death and illness than earlier models had suggested. Since then, a number of models, including Governor Newsom’s projection of 25.5 million infected Californians over an eight-week period, have been questioned. Controversy exists over exactly why models are being recalibrated downward. One explanation is that the shelter-in-place orders have been more successful than expected; others point to various flawed modeling assumptions.

Front-line physicians who see sick patients do not necessarily agree with researchers in the lab. For example, a Los Angeles Times story was widely picked up by other news outlets that quoted Dr. Jeff Smith, the chief executive of Santa Clara County. Smith reportedly now believes that the virus arrived in California much earlier than often cited, at least in early 2020:

The severity of flu season made health care professionals think that patients were suffering from influenza given the similarity of some of the symptoms. In reality, however, a handful of sick Californians that were going to the doctor earlier this year may have been among the first to be carrying the coronavirus. “The virus was freewheeling in our community and probably has been here for quite some time,” Smith, a physician, told county leaders in a recent briefing. The failure of authorities to detect the virus earlier has allowed it to spread unchecked in California and across the nation. “This wasn’t recognized because we were having a severe flu season. . . . Symptoms are very much like the flu. If you got a mild case of COVID, you didn’t really notice. You didn’t even go to the doctor. . . . The doctor maybe didn’t even do it because they presumed it was the flu.”

Footnote:  See also Good Virus News from the Promised Land

How bad is covid really? (A Swedish doctor’s POV)

This is a reblog of the post at Sebastian Rushworth M.D. Health and medical information grounded in science.  Excerpts in italics with my bolds.

Ok, I want to preface this article by stating that it is entirely anecdotal and based on my experience working as a doctor in the emergency room of one of the big hospitals in Stockholm, Sweden, and of living as a citizen in Sweden. As many people know, Sweden is perhaps the country that has taken the most relaxed attitude of any towards the covid pandemic. Unlike other countries, Sweden never went into complete lockdown. Non-essential businesses have remained open, people have continued to go to cafés and restaurants, children have remained in school, and very few people have bothered with face masks in public.

Covid hit Stockholm like a storm in mid-March. One day I was seeing people with appendicitis and kidney stones, the usual things you see in the emergency room. The next day all those patients were gone and the only thing coming in to the hospital was covid. Practically everyone who was tested had covid, regardless of what the presenting symptom was. People came in with a nose bleed and they had covid. They came in with stomach pain and they had covid.

Then, after a few months, all the covid patients disappeared. It is now four months since the start of the pandemic, and I haven’t seen a single covid patient in over a month. When I do test someone because they have a cough or a fever, the test invariably comes back negative. At the peak three months back, a hundred people were dying a day of covid in Sweden, a country with a population of ten million. We are now down to around five people dying per day in the whole country, and that number continues to drop. Since people generally die around three weeks after infection, that means virtually no-one is getting infected any more. If we assume around 0.5 percent of those infected die (which I think is very generous, more on that later), then that means that three weeks back 1,000 people were getting infected per day in the whole country, which works out to a daily risk per person of getting infected of 1 in 10,000, which is minuscule. And remember, the risk of dying is at the very most 1 in 200 if you actually do get infected. And that was three weeks ago.
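The doctor's back-of-envelope chain can be written out explicitly. All inputs are his stated assumptions (current daily deaths, an assumed 0.5% infection fatality rate, Sweden's population), not measured values:

```python
# Inputs are the author's stated assumptions, not measured data.
deaths_per_day = 5            # Swedish daily covid deaths at time of writing
ifr = 0.005                   # assumed 0.5% infection fatality rate ("generous")
population = 10_000_000       # Sweden

# Deaths lag infection by ~3 weeks, so today's deaths imply infections then:
infections_per_day = deaths_per_day / ifr          # ~1,000 per day
daily_risk = infections_per_day / population       # ~1e-4, i.e. ~1 in 10,000

print(round(infections_per_day), f"1 in {round(1 / daily_risk):,}")
```

The chain is just two divisions; the "1 in 10,000" daily risk in the excerpt falls out of the assumed fatality rate and the three-week death lag.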

Basically, covid is in all practical senses over and done with in Sweden. After four months.

In total covid has killed under 6,000 people in a country of ten million. A country with an annual death rate of around 100,000 people. Considering that 70% of those who have died of covid are over 80 years old, quite a few of those 6,000 would have died this year anyway. That makes covid a mere blip in terms of its effect on mortality.

That is why it is nonsensical to compare covid to other major pandemics, like the 1918 pandemic that killed tens of millions of people. Covid will never even come close to those numbers. And yet many countries have shut down their entire economies, stopped children going to school, and made large portions of their population unemployed in order to deal with this disease.

The media have been proclaiming that only a small percentage of the population have antibodies, and therefore it is impossible that herd immunity has developed. Well, if herd immunity hasn’t developed, where are all the sick people? Why has the rate of infection dropped so precipitously? Considering that most people in Sweden are leading their lives normally now, not socially distancing, not wearing masks, there should still be high rates of infection.

The reason we test for antibodies is because it is easy and cheap. Antibodies are in fact not the body’s main defence against virus infections. T-cells are. But T-cells are harder to measure than antibodies, so we don’t really do it clinically. It is quite possible to have T-cells that are specific for covid and thereby make you immune to the disease, without having any antibodies. Personally, I think this is what has happened. Everybody who works in the emergency room where I work has had the antibody test. Very few actually have antibodies. This is in spite of being exposed to huge numbers of infected people, including at the beginning of the pandemic, before we realized how widespread covid was, when no-one was wearing protective equipment.

I am not denying that covid is awful for the people who do get really sick or for the families of the people who die, just as it is awful for the families of people who die of cancer, or influenza, or an opioid overdose.

But the size of the response in most of the world (not including Sweden) has been totally disproportionate to the size of the threat.

Sweden ripped the metaphorical band-aid off quickly and got the epidemic over and done with in a short amount of time, while the rest of the world has chosen to try to peel the band-aid off slowly. At present that means Sweden has one of the highest total death rates in the world. But covid is over in Sweden. People have gone back to their normal lives and barely anyone is getting infected any more. I am willing to bet that the countries that have shut down completely will see rates spike when they open up. If that is the case, then there won’t have been any point in shutting down in the first place, because all those countries are going to end up with the same number of dead at the end of the day anyway. Shutting down completely in order to decrease the total number of deaths only makes sense if you are willing to stay shut down until a vaccine is available. That could take years. No country is willing to wait that long.

Covid has at present killed fewer than 6,000 people in Sweden. It is very unlikely that the number of dead will go above 7,000. In an average influenza year in Sweden, 700 people die of influenza. Does that mean covid is ten times worse than influenza? No, because influenza has been around for centuries while covid is completely new. In an average influenza year most people already have some level of immunity because they’ve been infected with a similar strain previously, or because they’re vaccinated. So it is quite possible, in fact likely, that the case fatality rate for covid is the same as for influenza, or only slightly higher, and that the entire difference we have seen is due to the complete lack of any immunity in the population at the start of this pandemic.

This conclusion makes sense of the Swedish fatality numbers. If we have reached a point where there is hardly any active infection left in Sweden, in spite of barely any social distancing, then at least 50% of the population, which is five million people, has already been infected and developed immunity. This number is perfectly reasonable if we assume a reproductive number of two for the virus: if each person infects two new people, with a five-day period between being infected and infecting others, and you start out with just one infected person in the country, then several million will have been infected within four months.
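The growth arithmetic in that paragraph can be sketched in a few lines. This is a back-of-envelope illustration only, using the post's own assumptions (one seed case, each case infecting two others, a five-day generation time), not epidemiological estimates:

```python
# Hypothetical growth sketch: one seed case, R0 = 2, 5 days per generation.
# Counts how many days it takes for cumulative infections to pass a threshold.
def days_to_reach(threshold, r0=2, gen_days=5):
    new_cases = 1     # cases in the current generation
    cumulative = 1    # everyone ever infected so far
    days = 0
    while cumulative < threshold:
        new_cases *= r0          # each generation multiplies by R0
        cumulative += new_cases  # add this generation to the running total
        days += gen_days
    return days

print(days_to_reach(5_000_000))  # 110 days, i.e. under four months
```

Of course real epidemics slow as susceptibles run out, so this unchecked doubling only shows that four months is ample time under these assumptions.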

If only 6000 are dead out of five million infected, that works out to a case fatality rate of 0.12 percent, roughly the same as regular old influenza, which no-one is the least bit frightened of, and for which we don’t shut down our societies.


Covid Recedes in Canada August

The map shows that in Canada 8979 deaths have been attributed to Covid19, meaning people who died having tested positive for SARS CV2 virus.  This number accumulated over a period of 204 days starting January 31. The daily death rate reached a peak of 177 on May 6, 2020, and is down to 5 as of yesterday.  More details on this below, but first the summary picture. (Note: 2019 is the latest demographic report)

Canada        Pop          Ann Deaths   Daily Deaths   Risk per Person
2019          37,589,262   330,786      906            0.8800%
Covid 2020    37,589,262   8,979        44             0.0239%

Over the epidemic months, the average Covid daily death rate amounted to 5% of the All Causes death rate. During this time a Canadian had an average risk of 1 in 5000 of dying with SARS CV2 versus a 1 in 114 chance of dying regardless of that infection. As shown later below the risk varied greatly with age, much lower for younger, healthier people.
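The comparison above can be recomputed directly from the table's figures (the "roughly 1 in 4,000 to 5,000" comment below is my arithmetic on those figures, not an additional claim from the post):

```python
# Recompute the daily death-rate comparison from the table's numbers.
pop = 37_589_262
ann_deaths = 330_786       # all-cause deaths, 2019
covid_deaths = 8_979       # cumulative Covid-attributed deaths
epidemic_days = 204        # Jan 31 to the date of the post

daily_all = ann_deaths / 365                 # ~906 deaths/day, all causes
daily_covid = covid_deaths / epidemic_days   # ~44 deaths/day with SARS CV2
share = daily_covid / daily_all              # ~0.05, i.e. about 5%
risk = covid_deaths / pop                    # ~0.024%, roughly 1 in 4,000-5,000
```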

Background Updated from Previous Post

In reporting on Covid19 pandemic, governments have provided information intended to frighten the public into compliance with orders constraining freedom of movement and activity. For example, the above map of the Canadian experience is all cumulative, and the curve will continue upward as long as cases can be found and deaths attributed.  As shown below, we can work around this myopia by calculating the daily differentials, and then averaging newly reported cases and deaths by seven days to smooth out lumps in the data processing by institutions.

A second major deficiency is lack of reporting of recoveries, including people infected and not requiring hospitalization or, in many cases, without professional diagnosis or treatment. The only recoveries presently to be found are limited statistics on patients released from hospital. The only way to get at the scale of recoveries is to subtract deaths from cases, considering survivors to be in recovery or cured. Comparing such numbers involves the delay between infection, symptoms and death. Herein lies another issue of terminology: a positive test for the SARS CV2 virus is reported as a case of the disease COVID19. In fact, an unknown number of people have been infected without symptoms, and many with very mild discomfort.

August 7 in the UK it was reported (here) that around 10% of coronavirus deaths recorded in England – almost 4,200 – could be wiped from official records due to an error in counting.  Last month, Health Secretary Matt Hancock ordered a review into the way the daily death count was calculated in England citing a possible ‘statistical flaw’.  Academics found that Public Health England’s statistics included everyone who had died after testing positive – even if the death occurred naturally or in a freak accident, and after the person had recovered from the virus.  Numbers will now be reconfigured, counting deaths if a person died within 28 days of testing positive much like Scotland and Northern Ireland…

Professor Heneghan, director of the Centre for Evidence-Based Medicine at Oxford University, who first noticed the error, told the Sun: ‘It is a sensible decision. There is no point attributing deaths to Covid-19 28 days after infection…

For this discussion let’s assume that anyone reported as dying from COVID19 tested positive for the virus at some point prior. From the reasoning above, let us assume that 28 days after testing positive for the virus, survivors can be considered recoveries.

Recoveries are calculated as cases minus deaths with a lag of 28 days. Daily cases and deaths are averages of the seven days ending on the stated date. Recoveries are # of cases from 28 days earlier minus # of daily deaths on the stated date. Since both testing and reports of Covid deaths were sketchy in the beginning, this graph begins with daily deaths as of April 24, 2020 compared to cases reported on March 27, 2020.
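The recovery estimate just described can be written as a small function. The series below are hypothetical placeholders, not the actual Canadian data:

```python
# Recoveries on day t = 7-day average of cases ending day t-28
#                       minus 7-day average of deaths ending day t.
def seven_day_avg(series, end_day):
    window = series[end_day - 6 : end_day + 1]  # the 7 days ending end_day
    return sum(window) / len(window)

def estimated_recoveries(daily_cases, daily_deaths, day, lag=28):
    return seven_day_avg(daily_cases, day - lag) - seven_day_avg(daily_deaths, day)

# e.g. a constant 100 cases/day and 2 deaths/day gives 98 recoveries/day
cases = [100] * 60
deaths = [2] * 60
print(estimated_recoveries(cases, deaths, 40))  # 98.0
```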

The line shows the Positivity metric for Canada starting at nearly 8% for new cases April 24, 2020. That is, for the 7 day period ending April 24, there were a daily average of 21,772 tests and 1715 new cases reported. Since then the rate of new cases has dropped down, now holding steady at ~1% since mid-June. Yesterday, the daily average number of tests was 43,612 with 375 new cases. So despite double the testing, the positivity rate is not climbing.  Another view of the data is shown below.
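The positivity figures quoted above follow from a one-line calculation on the 7-day averages named in the text:

```python
# Positivity = new cases as a percentage of tests over the same window.
def positivity_pct(new_cases, tests):
    return 100 * new_cases / tests

print(round(positivity_pct(1715, 21772), 1))  # 7.9, the April 24 window
print(round(positivity_pct(375, 43612), 1))   # 0.9, the latest window
```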

The scale of testing has increased and has now reached nearly 50,000 a day, while positive tests (cases) are hovering at 1% positivity.  The shape of the recovery curve resembles the case curve lagged by 28 days, since death rates are a small portion of cases.  The recovery rate has grown from 83% to 98%, holding steady over the last 2 weeks.  This approximation surely understates the number of those infected with SARS CV2 who are healthy afterwards, since antibody studies show infection rates multiples higher than confirmed positive tests (8 times higher in Canada).  In absolute terms, cases are now down to 375 a day and deaths to 5 a day, while estimated recoveries are 285 a day.

Summary of Canada Covid Epidemic

It took a lot of work, but I was able to produce something akin to the Dutch advice to their citizens.

The media and governmental reports focus on total accumulated numbers which are big enough to scare people to do as they are told.  In the absence of contextual comparisons, citizens have difficulty answering the main (perhaps only) question on their minds:  What are my chances of catching Covid19 and dying from it?

A previous post reported that the Netherlands parliament was provided with the type of guidance everyone wants to see.

For Canadians, the most similar analysis is this one from the Daily Epidemiology Update:

The table presents only those cases with a full clinical documentation, which included some 2194 deaths compared to the 5842 total reported.  The numbers show that under 60 years old, few adults and almost no children have anything to fear.

Update May 20, 2020

It is really quite difficult to find cases and deaths broken down by age groups.  For Canadian national statistics, I resorted to a report from Ontario to get the age distributions, since that province provides 69% of the cases outside of Quebec and 87% of the deaths.  Applying those proportions across Canada results in this table. For Canada as a whole nation:

Age     Risk of Test +   Risk of Death   Population per 1 CV death
<20     0.05%            None            NA
20-39   0.20%            0.000%          431,817
40-59   0.25%            0.002%          42,273
60-79   0.20%            0.020%          4,984
80+     0.76%            0.251%          398

In the worst case, if you are a Canadian aged more than 80 years, you have a 1 in 400 chance of dying from Covid19.  If you are 60 to 80 years old, your odds are 1 in 5000.  Younger than that, your odds are only slightly higher than winning (or in this case losing) the lottery.
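The "1 in N" odds quoted here are straightforward conversions of the table's percentage risks:

```python
# Convert a percentage risk into "1 in N" odds.
def one_in(risk_pct):
    return round(100 / risk_pct)

print(one_in(0.251))  # 398, i.e. roughly 1 in 400 for the 80+ group
print(one_in(0.020))  # 5000, i.e. 1 in 5000 for the 60-79 group
```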

As noted above, Quebec provides the bulk of cases and deaths in Canada, and also reports age distribution more precisely.  The numbers in the table below show risks for Quebecers.

Age        Risk of Test +   Risk of Death   Population per 1 CV death
0-9 yrs    0.13%            0               NA
10-19 yrs  0.21%            0               NA
20-29 yrs  0.50%            0.000%          289,647
30-39 yrs  0.51%            0.001%          152,009
40-49 yrs  0.63%            0.001%          73,342
50-59 yrs  0.53%            0.005%          21,087
60-69 yrs  0.37%            0.021%          4,778
70-79 yrs  0.52%            0.094%          1,069
80-89 yrs  1.78%            0.469%          213
90+        5.19%            1.608%          62

While some of the risk factors are higher in the viral hotspot of Quebec, it is still the case that under 80 years of age, your chances of dying from Covid19 are better than 1 in 1000, and much better the younger you are.

Heisenberg Uncertainty Appears in Socio-Political Research

Background:  Heisenberg Uncertainty

In the sub-atomic domain of quantum mechanics, Werner Heisenberg, a German physicist, determined that our observations have an effect on the behavior of quanta (quantum particles).

The Heisenberg uncertainty principle states that it is impossible to know simultaneously the exact position and momentum of a particle. That is, the more exactly the position is determined, the less known the momentum, and vice versa. This principle is not a statement about the limits of technology, but a fundamental limit on what can be known about a particle at any given moment. This uncertainty arises because the act of measuring affects the object being measured. The only way to measure the position of something is using light, but, on the sub-atomic scale, the interaction of the light with the object inevitably changes the object’s position and its direction of travel.

Now skip to the world of governance and the effects of regulation. A similar finding shows that the act of regulating produces reactive behavior and unintended consequences contrary to the desired outcomes. More on that later on from a previous post.

This article looks at political and social research attempts to describe the electorate’s preoccupations and preferences ahead of 2020 US Presidential voting in November.

John McLaughlin explains in his article Biased Polls Suppress Vote  Excerpts in italics with my bolds.

McLaughlin noted that among the 220 million eligible voters in the U.S., only around 139 million voted in 2016, which is considered the most ever.

“Even if it goes up to 140-150 million, the polls of adults are going to be skewed against Republicans,” McLaughlin told Monday’s “Greg Kelly Reports,” especially “since President Trump gets over 90% support from Republicans.”

McLaughlin noted CNN’s poll among adults featured just 25% registered Republicans, whereas around one-third of the electorate that voted in 2016 were Republicans.

He added to host Greg Kelly, it costs more to run focused polls of likely voters from actual voter registration lists.

“It’s cheaper for them to do,” in addition to being advantageous to the Democratic candidate, McLaughlin told Kelly. “They don’t have to buy a sample of voters, that campaign pollsters – whether Republican or Democrat – are going to have to do.”

Also, per McLaughlin, reporting a blowout lead can ultimately cause voter suppression, a frequent rallying cry of Democrats against Republicans in elections.

Politico notes that there is nothing nefarious going on to skew these polls toward Biden. But they do have the same issue the 2016 polls had: They’re not reaching all of the Trump supporters.

At the center of the issue are white voters without college degrees; in 2016, Trump earned 67% of this demographic’s support, while Democrat Hillary Clinton got just 28%. Current polls, according to Politico, are not capturing enough of this voting bloc, which unintentionally skews the results toward Biden.

My Comment:  This post was inspired by a Flynnville Train song that captures the sentiment of working class Americans alienated from the political process.  Disrespected as “deplorables” they turned out for Trump and made the difference in 2016.  Now with arbitrary pandemic restrictions and random urban rioting, these folks are even more incensed about the political elite.  Lest anyone think them inconsequential, remember that many of them get up and go to watch the most popular US spectator sport.  I refer to stock car racing, not the kneeling football or basketball athletes.

Lyrics:

IF YOUR HANDS ARE HURTIN’ FROM A WEEK OF WORKIN’
AND HOLDING YOUR WOMAN IS THE ONLY THING THEY’RE GOOD FOR
YOU’RE PREACHING TO THE CHOIR
IF THE PRICE OF GAS IS BREAKIN’ YOUR BACK
AND THAT DRIVE TO WORK IS KILLIN’ YOUR PAYCHECK
YOU’RE PREACHING TO THE CHOIR
IF YOU’RE WORRIED ‘BOUT WHERE THIS COUNTRY’S HEADED
AND YOU DON’T BELIEVE ONE POLITICIAN GETS IT

CHORUS
YOURE PREACHIN’ TO THE CHOIR
A FELLOW WORKIN’ MAN
THERE’S A WHOLE LOT OF STUFF MESSED UP
CAN I GETTA AMEN
SOMETHING’S GOTTA GIVE
CAUSE WE’RE ALL GETTING TIRED
SO GO ON BITCH AND MOAN
YOU’RE PREACHING TO THE CHOIR

IF THE GOOD BOOK SITS BESIDE YOUR BED
AND UNDER YOUR ROOF WE’RE STILL ONE NATION UNDER GOD
YOU’RE PREACHING TO THE CHOIR
IF YOU LIKE THE CHANCE TO WRAP YOUR HANDS
ROUND THAT S.O.B. THAT HURT THAT KID ON THE EVENIN’ NEWS
YOU’RE PREACHING TO THE CHOIR
IF YOU KNOW THERE AIN’T NO HERO LIKE A SOLDIER
BUT YOU HATE TO EVER HAVE TO SEND ‘EM OVER

CHORUS

IF THE GOLDEN RULE STILL MEANS SOMETHING TO YA
WELL HALLELUJAH

CHORUS

PREACHING TO THE CHOIR

Previous Post: Regulatory Backfire

An article at the Financial Times explains Energy Regulations Unintended Consequences.  Excerpts below with my bolds.

Goodhart’s Law holds that “any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes”. Originally coined by the economist Charles Goodhart as a critique of the use of money supply measures to guide monetary policy, it has been adopted as a useful concept in many other fields. The general principle is that when any measure is used as a target for policy, it becomes unreliable. It is an observable phenomenon in healthcare, in financial regulation and, it seems, in energy efficiency standards.

When governments set efficiency regulations such as the US Corporate Average Fuel Economy standards for vehicles, they are often what is called “attribute-based”, meaning that the rules take other characteristics into consideration when determining compliance. The Cafe standards, for example, vary according to the “footprint” of the vehicle: the area enclosed by its wheels. In Japan, fuel economy standards are weight-based. Like all regulations, fuel economy standards create incentives to game the system, and where attributes are important, that can mean finding ways to exploit the variations in requirements. There have long been suspicions that the footprint-based Cafe standards would encourage manufacturers to make larger cars for the US market, but a paper this week from Koichiro Ito of the University of Chicago and James Sallee of the University of California Berkeley provided the strongest evidence yet that those fears are likely to be justified.

Mr Ito and Mr Sallee looked at Japan’s experience with weight-based fuel economy standards, which changed in 2009, and concluded that “the Japanese car market has experienced a notable increase in weight in response to attribute-based regulation”. In the US, the Cafe standards create a similar pressure, but expressed in terms of size rather than weight. Mr Ito suggested that in Ford’s decision to end almost all car production in North America to focus on SUVs and trucks, “policy plays a substantial role”. It is not just that manufacturers are focusing on larger models; specific models are also getting bigger. Ford’s move, Mr Ito wrote, should be seen as an “alarm bell” warning of the flaws in the Cafe system. He suggests an alternative framework with a uniform standard and tradeable credits, as a more effective and lower-cost option. With the Trump administration now reviewing fuel economy and emissions standards, and facing challenges from California and many other states, the vehicle manufacturers appear to be in a state of confusion. An elegant idea for preserving plans for improving fuel economy while reducing the cost of compliance could be very welcome.

The paper is The Economics of Attribute-Based Regulation: Theory and Evidence from Fuel-Economy Standards Koichiro Ito, James M. Sallee NBER Working Paper No. 20500.  The authors explain:

An attribute-based regulation is a regulation that aims to change one characteristic of a product related to the externality (the “targeted characteristic”), but which takes some other characteristic (the “secondary attribute”) into consideration when determining compliance. For example, Corporate Average Fuel Economy (CAFE) standards in the United States recently adopted attribute-basing. Figure 1 shows that the new policy mandates a fuel-economy target that is a downward-sloping function of vehicle “footprint”—the square area trapped by a rectangle drawn to connect the vehicle’s tires.  Under this schedule, firms that make larger vehicles are allowed to have lower fuel economy. This has the potential benefit of harmonizing marginal costs of regulatory compliance across firms, but it also creates a distortionary incentive for automakers to manipulate vehicle footprint.
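The footprint-based schedule described here can be illustrated with a toy function. The coefficients below are made up for illustration; they are not the actual CAFE target-curve parameters:

```python
# Illustrative attribute-based target: required fuel economy is a
# downward-sloping function of vehicle footprint, so a bigger footprint
# buys a weaker requirement (the distortionary incentive in the text).
# base_mpg, slope, and ref_sqft are hypothetical values.
def mpg_target(footprint_sqft, base_mpg=41.0, slope=0.25, ref_sqft=45.0):
    return base_mpg - slope * (footprint_sqft - ref_sqft)

print(mpg_target(45.0))  # 41.0 mpg required for a 45 sq ft footprint
print(mpg_target(55.0))  # 38.5 mpg required for a 55 sq ft footprint
```

Upsizing the vehicle by 10 square feet relaxes the standard by 2.5 mpg in this sketch, which is exactly the manipulation incentive the paper studies.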

Attribute-basing is used in a variety of important economic policies. Fuel-economy regulations are attribute-based in China, Europe, Japan and the United States, which are the world’s four largest car markets. Energy efficiency standards for appliances, which allow larger products to consume more energy, are attribute-based all over the world. Regulations such as the Clean Air Act, the Family Medical Leave Act, and the Affordable Care Act are attribute-based because they exempt some firms based on size. In all of these examples, attribute-basing is designed to provide a weaker regulation for products or firms that will find compliance more difficult.

Summary from Heritage Foundation study Fuel Economy Standards Are a Costly Mistake Excerpt with my bolds.

The CAFE standards are not only an extremely inefficient way to reduce carbon dioxide emission but will also have a variety of unintended consequences.

For example, the post-2010 standards apply lower mileage requirements to vehicles with larger footprints. Thus, Whitefoot and Skerlos argued that there is an incentive to increase the size of vehicles.

Data from the first few years under the new standard confirm that the average footprint, weight, and horsepower of cars and trucks have indeed all increased since 2008, even as carbon emissions fell, reflecting the distorted incentives.

Manufacturers have found work-arounds to thwart the intent of the regulations. For example, the standards raised the price of large cars, such as station wagons, relative to light trucks. As a result, automakers created a new type of light truck—the sport utility vehicle (SUV)—which was covered by the lower standard and had low gas mileage but met consumers’ needs. Other automakers have simply chosen to miss the thresholds and pay fines on a sliding scale.

Another well-known flaw in CAFE standards is the “rebound effect.” When consumers are forced to buy more fuel-efficient vehicles, the cost per mile falls (since their cars use less gas) and they drive more. This offsets part of the fuel economy gain and adds congestion and road repair costs. Similarly, the rising price of new vehicles causes consumers to delay upgrades, leaving older vehicles on the road longer.
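The rebound effect can be sketched with an assumed elasticity of driving with respect to fuel cost per mile. The elasticity of -0.1 is a hypothetical value chosen for illustration, not an estimate from the literature:

```python
# Rebound sketch: higher mpg lowers fuel cost per mile, and miles driven
# respond with an assumed (hypothetical) elasticity.
def rebound_miles(base_miles, old_mpg, new_mpg, elasticity=-0.1):
    cost_change = (old_mpg / new_mpg) - 1.0   # fractional change in $/mile
    return base_miles * (1.0 + elasticity * cost_change)

# Going from 25 to 40 mpg cuts cost per mile by 37.5%,
# so annual driving rises about 3.75% under this elasticity.
print(round(rebound_miles(12_000, 25, 40)))  # 12450
```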

In addition, the higher purchase price of cars under a stricter CAFE standard is likely to force millions of households out of the new-car market altogether. Many households face credit constraints when borrowing money to purchase a car. David Wagner, Paulina Nusinovich, and Esteban Plaza-Jennings used Bureau of Labor Statistics data and typical finance industry debt-service-to-income ratios and estimated that 3.1 million to 14.9 million households would not have enough credit to purchase a new car under the 2025 CAFE standards.[34] This impact would fall disproportionately on poorer households and force the use of older cars with higher maintenance costs and with fuel economy that is generally lower than that of new cars.

CAFE standards may also have redistributed corporate profits to foreign automakers and away from Ford, General Motors (GM), and Chrysler (the Big Three), because foreign-headquartered firms tend to specialize in vehicles that are favored under the new standards.[35] 

Conclusion

CAFE standards are costly, inefficient, and ineffective regulations. They severely limit consumers’ ability to make their own choices concerning safety, comfort, affordability, and efficiency. Originally based on the belief that consumers undervalued fuel economy, the standards have morphed into climate control mandates. Under any justification, regulation gives the desires of government regulators precedence over those of the Americans who actually pay for the cars. Since the regulators undervalue the well-being of American consumers, the policy outcomes are predictably harmful.


Cool July for Land and Ocean Air Temps

With apologies to Paul Revere, this post is on the lookout for cooler weather with an eye on both the Land and the Sea.  UAH has updated their TLT (temperatures in lower troposphere) dataset for July 2020.  Previously I have done posts on their reading of ocean air temps as a prelude to updated records from HadSST3. This month also has a separate graph of land air temps because the comparisons and contrasts are interesting as we contemplate possible cooling in coming months and years.

Presently sea surface temperatures (SST) are the best available indicator of heat content gained or lost from earth’s climate system.  Enthalpy is the thermodynamic term for total heat content in a system, and humidity differences in air parcels affect enthalpy.  Measuring water temperature directly avoids distorted impressions from air measurements.  In addition, ocean covers 71% of the planet surface and thus dominates surface temperature estimates.  Eventually we will likely have reliable means of recording water temperatures at depth.

Recently, Dr. Ole Humlum reported from his research that air temperatures lag 2-3 months behind changes in SST.  He also observed that changes in CO2 atmospheric concentrations lag behind SST by 11-12 months.  This latter point is addressed in a previous post Who to Blame for Rising CO2?

HadSST3 results were delayed, with February and March updates only appearing together at the end of April.  For comparison we can look at lower troposphere temperatures (TLT) from UAHv6, which are now posted for July. The temperature record is derived from microwave sounding units (MSU) on board satellites like the one pictured above.

The UAH dataset includes temperature results for air above the oceans, and thus should be most comparable to the SSTs. There is the additional feature that ocean air temps avoid Urban Heat Islands (UHI). In 2015 there was a change in UAH processing of satellite drift corrections, including dropping one platform which can no longer be corrected. The graphs below are taken from the latest and current dataset, Version 6.0.

The graph above shows monthly anomalies for ocean temps since January 2015. After all regions peaked with the El Nino in early 2016, the ocean air temps dropped back down, with all regions showing the same low anomaly in August 2018.  Then a warming phase ensued with NH and Tropics spikes in February and May 2020. As was the case in 2015-16, the warming was driven by the Tropics and NH, with SH lagging behind. After the up and down fluxes, ocean temps in June returned to a neutral point, close to the 0.4C average for the period. NH rose only slightly in July and was offset by a drop in SH, reducing the chance of another NH or Tropics warming bump this summer.

Land Air Temperatures Showing a Seesaw Pattern

We sometimes overlook that in climate temperature records, while the oceans are measured directly with SSTs, land temps are measured only indirectly.  The land temperature records at surface stations sample air temps at 2 meters above ground.  UAH gives tlt anomalies for air over land separately from ocean air temps.  The graph updated for July 2020 is below.

Here we see evidence of the greater volatility of the Land temperatures, along with extraordinary departures, first by NH land with SH often offsetting.   The overall pattern is similar to the ocean air temps, but obviously driven by NH with its greater amount of land surface. The Tropics synchronized with NH for the 2016 event, but otherwise follow a contrary rhythm.  SH seems to vary wildly, especially in recent months.  Note the extremely high anomaly last November, cold in March 2020, and then again a spike in April. In June 2020, all land regions converged, erasing the earlier spikes in NH and SH, and showing anomalies comparable to the 0.5C average land anomaly this period.

In July land air temps were the reverse of ocean air temps.  SH land temps bumped up, while NH and Tropics declined, giving the same flat result from the prior month.

The longer term picture from UAH is a return to the mean for the period starting with 1995.  2019 average rose but currently lacks any El Nino or NH warm blob to sustain it.

These charts demonstrate that underneath the averages, warming and cooling is diverse and constantly changing, contrary to the notion of a global climate that can be fixed at some favorable temperature.

TLTs include mixing above the oceans and probably some influence from nearby more volatile land temps.  Clearly NH and Global land temps have been dropping in a seesaw pattern, NH in July more than 1C lower than the 2016 peak.  TLT measures started the recent cooling later than SSTs from HadSST3, but are now showing the same pattern.  It seems obvious that despite the three El Ninos, their warming has not persisted, and without them it would probably have cooled since 1995.  Of course, the future has not yet been written.

Data Update Shows Orwellian Climate Science

Climate science is unsettling because past data are not fixed, but change later on.  I ran into this when I set out to update an analysis done in 2014 by Jeremy Shiers, which I discussed in a previous post reprinted at the end.  Jeremy provided a spreadsheet in his essay Murray Salby Showed CO2 Follows Temperature Now You Can Too posted in January 2014. I downloaded his spreadsheet intending to bring the analysis up to the present to see if the results hold up.  The two sources of data were:

Temperature anomalies from RSS here:  http://www.remss.com/missions/amsu

CO2 monthly levels from NOAA (Mauna Loa): https://www.esrl.noaa.gov/gmd/ccgg/trends/data.html

Uploading the CO2 dataset showed that many numbers had changed (why?).

The blue line shows annual observed differences in monthly values year over year, e.g. June 2020 minus June 2019 etc.  The first 12 months (1979) provide the observed starting values from which differentials are calculated.  The orange line shows those differentials changed slightly in the 2020 dataset vs. the 2014 dataset, on average +0.035 ppm.  But there is no pattern or trend added, and deviations vary randomly between + and -.  So I took the current dataset to replace the older one for updating the analysis.
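The year-over-year differential used in that comparison is simply each month's value minus the same month one year earlier, with the first 12 months seeding the calculation. A minimal sketch, with a made-up series in place of the Mauna Loa data:

```python
# Year-over-year differentials of a monthly CO2 series (ppm):
# value for month i minus the value for month i-12.
def yoy_diffs(monthly_ppm):
    return [monthly_ppm[i] - monthly_ppm[i - 12]
            for i in range(12, len(monthly_ppm))]

# e.g. a series rising a steady 0.15 ppm/month differs by 1.8 ppm year over year
series = [400 + 0.15 * m for m in range(36)]
print(round(yoy_diffs(series)[0], 2))  # 1.8
```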

The other time series is the record of global temperature anomalies according to RSS. The current RSS dataset is not at all the same as the past.


Here we see some seriously unsettling science at work.  The gold line is 2020 RSS and the purple is RSS as of 2014.  The red line shows alterations from the old to the new.  There is a slight cooling of the data in the beginning years, then the two versions pretty much match until 1997, when systematic warming enters the record.  From 1997/5 to 2003/12 the average anomaly increases by 0.04C.  From 2004/1 to 2012/8 the average increase is 0.15C.  At the end, from 2012/9 to 2013/12, the average anomaly was higher by 0.21C.

RSS continues that accelerated warming to the present, but it cannot be trusted.  And who knows what the numbers will be a few years down the line?  As Dr. Ole Humlum said some years ago (regarding Gistemp): “It should however be noted, that a temperature record which keeps on changing the past hardly can qualify as being correct.”

Given the above manipulations, I went instead to the other satellite dataset UAH version 6. Here are UAH temperature anomalies compared to CO2 changes.

The changes in monthly CO2 synchronize with temperature fluctuations, which for UAH are anomalies referenced to the 1981-2010 period.  The final proof that CO2 follows temperature due to stimulation of natural CO2 reservoirs is demonstrated by the ability to calculate CO2 levels since 1979 with a simple mathematical formula:

For each subsequent year, the CO2 level for each month was generated as:

CO2(month, year) = a + b × Temp(month, year) + CO2(month, year - 1)

Jeremy used Python to estimate a and b, but I used his spreadsheet to choose values by trial, overlaying the calculated CO2 levels on the observed ones for comparison.
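That recursion is easy to reproduce. This is a minimal sketch with illustrative a and b values rather than Jeremy's fitted ones; the first 12 observed months seed the series, and every later month is a + b times that month's temperature anomaly plus the same month one year earlier:

```python
# Generate a CO2 series from monthly temperature anomalies via the
# recursion CO2(m) = a + b*Temp(m) + CO2(m-12). a and b are illustrative.
def model_co2(monthly_temps, first_year_co2, a=0.1, b=0.5):
    co2 = list(first_year_co2)            # 12 observed starting values (ppm)
    for m in range(12, len(monthly_temps)):
        co2.append(a + b * monthly_temps[m] + co2[m - 12])
    return co2

# With zero anomalies the series simply drifts up by `a` ppm per year:
out = model_co2([0.0] * 24, [400.0] * 12)
print(out[12])  # 400.1
```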

In the chart calculated CO2 levels correlate with observed CO2 levels at 0.9988 out of 1.0000.  This mathematical generation of CO2 atmospheric levels is only possible if they are driven by temperature-dependent natural sources, and not by human emissions which are small in comparison, rise steadily and monotonically.

Previous Post:  What Causes Rising Atmospheric CO2?

nasa_carbon_cycle_2008-1

This post is prompted by a recent exchange with those reasserting the “consensus” view attributing all additional atmospheric CO2 to humans burning fossil fuels.

The IPCC doctrine which has long been promoted goes as follows. We have a number over here for monthly fossil fuel CO2 emissions, and a number over there for monthly atmospheric CO2. We don’t have good numbers for the rest of it (oceans, soils, biosphere), though rough estimates are orders of magnitude higher, dwarfing human CO2.  So we ignore nature and assume it is always a sink, explaining the difference between the two numbers we do have. Easy peasy, science settled.

What about the fact that nature continues to absorb about half of human emissions, even while FF CO2 increased by 60% over the last two decades? What about the fact that so far in 2020 FF CO2 has declined significantly with no discernible impact on rising atmospheric CO2?

These and other issues are raised by Murray Salby and others who conclude that it is not that simple, and the science is not settled. And so these dissenters must be cancelled lest the narrative be weakened.

The non-IPCC paradigm is that atmospheric CO2 levels are a function of two very different fluxes. FF CO2 changes rapidly and increases steadily, while Natural CO2 changes slowly over time, and fluctuates up and down with temperature changes. The implications are that human CO2 is a simple addition, while natural CO2 comes from the integral of previous fluctuations.  Jeremy Shiers has a series of posts at his blog clarifying this paradigm. See Increasing CO2 Raises Global Temperature Or Does Increasing Temperature Raise CO2.  Excerpts in italics with my bolds.

The following graph which shows the change in CO2 levels (rather than the levels directly) makes this much clearer.

Note the vertical scale refers to the first derivative of the CO2 level, not the level itself. The graph depicts the rate of change in ppm per year.

There are big swings in the amount of CO2 emitted. Taking the mean as roughly 1.6 ppmv/year, there are swings of around ±1.2 ppmv/year, nearly ±100%.

And, surprise surprise, the change in net emissions of CO2 is very strongly correlated with changes in global temperature.

This clearly indicates the net amount of CO2 emitted in any one year is directly linked to global mean temperature in that year.

For any given year, the amount of CO2 in the atmosphere will be the sum of all the net annual emissions of CO2 in all previous years.

For each year the net annual emission of CO2 is proportional to the annual global mean temperature.

This means the amount of CO2 in the atmosphere will be related to the sum of temperatures in previous years.

So CO2 levels are not directly related to the current temperature, but to the integral of temperature over previous years.

The following graph again shows observed levels of CO2 and global temperatures, but also has calculated levels of CO2 based on the sum of previous years' temperatures (dotted blue line).
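The integral relationship described above can be sketched in a few lines. The proportionality constant k, the base level, and the temperature series here are hypothetical illustration values, not fitted parameters:

```python
# Sketch: if each year's net CO2 emission is k * (annual mean temperature),
# then the CO2 level is the running (cumulative) sum of those emissions
# on top of some base level. k, base_co2 and the temps are hypothetical.

def co2_from_temps(annual_temps, base_co2, k):
    """Return yearly CO2 levels as the cumulative sum of k * temp."""
    levels = []
    total = base_co2
    for temp in annual_temps:
        total += k * temp      # this year's net emission
        levels.append(total)   # level = integral of all past emissions
    return levels

levels = co2_from_temps([1.0, 1.0, 2.0], base_co2=300.0, k=1.6)
```

Note that taking first differences of the resulting levels recovers k times the temperature series, which is exactly the correlation shown in the rate-of-change graph earlier.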

Summary:

The massive fluxes from natural sources dominate the flow of CO2 through the atmosphere.  Human CO2 from burning fossil fuels is around 4% of the annual addition from all sources. Even if rising CO2 could cause rising temperatures (no evidence, only claims), reducing our emissions would have little impact.

Resources:

CO2 Fluxes, Sources and Sinks

Who to Blame for Rising CO2?

Fearless Physics from Dr. Salby

In this video presentation, Dr. Salby provides the evidence, math and charts supporting the non-IPCC paradigm.

About 18 minutes from the start Dr. Salby demonstrates that all the warming since 1945 came from two short term events.

If these two events (1977-1981 and 1994-1998) are removed, the entire 0.6C increase disappears.  Global Warming theory asserts that adding CO2 causes a systemic change resulting in a higher temperature baseline.  Two temperature spikes, each lasting four years, are clearly episodic, not systemic.  This is further proof that warming over the last 70 years arose from natural variations, not CO2 forcing.

The Real HCQ Story: What We Now Know

Steven Hatfill explains what happened to HCQ treatments against coronavirus in his RealClearPolitics article An Effective COVID Treatment the Media Continues to Besmirch.  Excerpts in italics with my bolds.

After examples of the media disinformation campaign, Steven provides a brief recounting of what has transpired over the last half year pandemic.

So what is the real story on hydroxychloroquine? Here, briefly, is what we know:

When the COVID-19 pandemic began, a search was made for suitable antiviral therapies to use as treatment until a vaccine could be produced. One drug, hydroxychloroquine, was found to be the most effective and safe for use against the virus. Federal funds were used for clinical trials of it, but there was no guidance from Dr. Anthony Fauci or the NIH Treatment Guidelines Panel on what role the drug would play in the national pandemic response. Fauci seemed to be unaware that there actually was a national pandemic plan for respiratory viruses.

Following a careful regimen developed by doctors in France, some knowledgeable practicing U.S. physicians began prescribing hydroxychloroquine to patients still in the early phase of COVID infection. Its effects seemed dramatic. Patients still became sick, but for the most part they avoided hospitalization. In contrast — and in error — the NIH-funded studies somehow became focused on giving hydroxychloroquine to late-presenting hospitalized patients. This was in spite of the fact that unlike the drug’s early use in ambulatory patients, there was no real data to support the drug’s use in more severe hospitalized patients.

By April, it was clear that roughly seven days from the time of the first onset of symptoms, a COVID-19 infection could sometimes progress into a more radical late phase of severe disease with inflammation of the blood vessels in the body and immune system over-reactions. Many patients developed blood clots in their lungs and needed mechanical ventilation. Some needed kidney dialysis. In light of this pathological carnage, no antiviral drug could be expected to show much of an effect during this severe second stage of COVID.

On April 6, 2020, an international team of medical experts published an extensive study of hydroxychloroquine in more than 130,000 patients with connective tissue disorders. They reaffirmed that hydroxychloroquine was a safe drug with no serious side effects. The drug could safely be given to pregnant women and breast-feeding mothers. Consequently, countries such as China, Turkey, South Korea, India, Morocco, Algeria, and others began to use hydroxychloroquine widely and early in their national pandemic response. Doctors overseas were safely prescribing the drug based on clinical signs and symptoms because widespread testing was not available.

However, the NIH promoted a much different strategy for the United States. The “Fauci Strategy” was to keep early infected patients quarantined at home without treatment until they developed a shortness of breath and had to be admitted to a hospital. Then they would be given hydroxychloroquine. The Food and Drug Administration cluelessly agreed to this doctrine and it stated in its hydroxychloroquine Emergency Use Authorization (EUA) that “hospitalized patients were likely to have a greater prospect of benefit (compared to ambulatory patients with mild illness).”

In reality just the opposite was true. This was a tragic mistake by Fauci and FDA Commissioner Dr. Stephen Hahn and it was a mistake that would cost the lives of thousands of Americans in the days to come.

At the same time, accumulating data showed remarkable results if hydroxychloroquine were given to patients early, during a seven-day window from the time of first symptom onset. If given during this window, most infections did not progress into the severe, lethal second stage of the disease. Patients still got sick, but they avoided hospitalization or the later transfer to an intensive care unit. In mid-April a high-level memo was sent to the FDA alerting them to the fact that the best use for hydroxychloroquine was for its early use in still ambulatory COVID patients. These patients were quarantined at home but were not short of breath and did not yet require supplemental oxygen and hospitalization.

Failing to understand that COVID-19 could be a two-stage disease process, the FDA ignored the memo and, as previously mentioned, it withdrew its EUA for hydroxychloroquine based on flawed studies and clinical trials that were applicable only to late-stage COVID patients.

By now, however, some countries had already implemented early, aggressive, outpatient community treatment with hydroxychloroquine and within weeks were able to minimize their COVID deaths and bring their national pandemic under some degree of control.

In countries such as Great Britain and the United States, where the “Fauci-Hahn Strategy” was followed, there was a much higher death rate and an ever-increasing number of cases. COVID patients in the U.S. would continue to be quarantined at home and left untreated until they developed shortness of breath. Then they would be admitted to the hospital and given hydroxychloroquine outside the narrow window for the drug’s maximum effectiveness.

In further contrast, countries that started out with the “Fauci-Hahn Doctrine” and then later shifted their policy towards aggressive outpatient hydroxychloroquine use, after a brief lag period also saw a stunning rapid reduction in COVID mortality and hospital admissions.

Finally, several nations that had started using an aggressive early-use outpatient policy for hydroxychloroquine, including France and Switzerland, stopped this practice when the WHO temporarily withdrew its support for the drug. Five days after the publication of the fake Lancet study and the resulting media onslaught, Swiss politicians banned hydroxychloroquine use in the country from May 27 until June 11, when it was quickly reinstated.

The consequences of suddenly stopping hydroxychloroquine can be seen by examining a graph of the Case Fatality Ratio Index (nrCFR) for Switzerland. This is derived by dividing the number of daily new COVID fatalities by the number of newly resolved cases, each smoothed with a seven-day moving average. Looking at the evolution curve of the CFR it can be seen that during the weeks preceding the ban on hydroxychloroquine, the nrCFR index fluctuated between 3% and 5%.

Following a lag of 13 days after stopping outpatient hydroxychloroquine use, the country’s COVID-19 deaths increased four-fold and the nrCFR index stayed elevated at the highest level it had been since early in the COVID pandemic, oscillating at over 10%-15%. Early outpatient hydroxychloroquine was restarted June 11 but the four-fold “wave of excess lethality” lasted until June 22, after which the nrCFR rapidly returned to its background value.
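The nrCFR computation described above can be sketched as follows. The daily counts fed in below are made-up illustration values, not Swiss data:

```python
# Sketch of the nrCFR index: 7-day trailing moving averages of daily new
# deaths and daily newly resolved cases, then their ratio. Input numbers
# are invented for illustration only.

def moving_avg(series, window=7):
    """Trailing moving average, shrinking the window at the start."""
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def nrcfr(new_deaths, resolved_cases, window=7):
    """Ratio of smoothed daily deaths to smoothed daily resolved cases."""
    deaths = moving_avg(new_deaths, window)
    resolved = moving_avg(resolved_cases, window)
    return [d / r if r else float("nan") for d, r in zip(deaths, resolved)]

index = nrcfr([1, 1, 2, 1, 1, 2, 1], [25, 20, 30, 25, 20, 30, 25])
```

A steady 1 death per day against 20 resolved cases per day, for instance, yields a flat index of 5%, the low end of the pre-ban range quoted above.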

Here in our country, Fauci continued to ignore the ever accumulating and remarkable early-use data on hydroxychloroquine and he became focused on a new antiviral compound named remdesivir. This was an experimental drug that had to be given intravenously every day for five days. It was never suitable for major widespread outpatient or at-home use as part of a national pandemic plan. We now know that remdesivir has no effect on overall COVID patient mortality and it costs thousands of dollars per patient.

Hydroxychloroquine, by contrast, costs 60 cents a tablet, it can be taken at home, it fits in with the national pandemic plan for respiratory viruses, and a course of therapy simply requires swallowing three tablets in the first 24 hours followed by one tablet every 12 hours for five days.

There are now 53 studies that show positive results of hydroxychloroquine in COVID infections. There are 14 global studies that show neutral or negative results — and 10 of them were of patients in very late stages of COVID-19, where no antiviral drug can be expected to have much effect. Of the remaining four studies, two come from the same University of Minnesota author. The other two are from the faulty Brazil paper, which should be retracted, and the fake Lancet paper, which was.

Millions of people are taking or have taken hydroxychloroquine in nations that have managed to get their national pandemic under some degree of control. Two recent, large, early-use clinical trials have been conducted by the Henry Ford Health System and at Mount Sinai showing a 51% and 47% lower mortality, respectively, in hospitalized patients given hydroxychloroquine. A recent study from Spain published on July 29, two days before Margaret Sullivan’s strafing of “fringe doctors,” shows a 66% reduction in COVID mortality in patients taking hydroxychloroquine. No serious side effects were reported in these studies and no epidemic of heartbeat abnormalities.

This is ground-shaking news. Why is it not being widely reported? Why is the American media trying to run the U.S. pandemic response with its own misinformation?

Steven Hatfill is a veteran virologist who helped establish the Rapid Hemorrhagic Fever Response Teams for the National Medical Disaster Unit in Kenya. He is an adjunct assistant professor in two departments at the George Washington University Medical Center where he teaches mass casualty medicine. He is principal author of the prophetic book “Three Seconds Until Midnight — Preparing for the Next Pandemic,” published by Amazon in 2019.


Resisting the PC “Karens”

On social media it has become common to refer to someone who scolds or punishes you for your behavior as “Karen being Karen.” It started with a stereotype of arrogant entitled white women who put down others lacking their privileged refinement. Since the return of the BLM movement, many are using the label with a racist tone dismissive of white people generally.

Leaving aside the racist connotation, I am focusing on the Karen role of enforcing politically correct behavior. For example, consider the recent Central Park incident in which a woman named Amy Cooper called the cops on a black man named Christian Cooper (no relation) and claimed that he was harassing her when in truth he was reprimanding her for letting her dog off its leash in a part of the park where you’re not meant to do that. Amy behaved badly in this incident. But as Robert A George argued in the New York Daily News: ‘[Christian] is the “Karen” in this encounter, deciding to enforce park rules unilaterally and to punish “intransigence” ruthlessly.’ Amy Cooper’s life has been shattered by this Karen-shaming incident: she lost her job and her dog.

Regardless of racial or gender identity, the “Karenness Quality” is this self-righteous public shaming of others for not behaving according to Karen’s Rules. For example, note the flip-flop of the mayor of Olympia, Washington. She was fine with the Black Lives Matter protests that followed George Floyd’s death in police custody. But that was until vandals damaged her home, according to reports. Changing her mind about the BLM protests when she was damaged personally, Mayor Cheryl Selby of Olympia now refers to the protests as “domestic terrorism,” according to The Olympian. “I’m really trying to process this,” Selby told the newspaper Saturday, after the rioters’ Friday night spree left her front door and porch covered with spray-painted messages. “It’s like domestic terrorism. It’s unfair.”

Karenism has this moral purity abstracted from personal experience with the hardships involved. Karen exemplar Marie Antoinette famously responded to the plight of breadless peasants with her “Let them eat cake.”

Karens are having a field day with The Wu Flu pandemania, such that I am in violation just for referring to the Chinese origin of this contagion. The media weaponizing the virus fear factor triggers the inner Karens to confront, denounce and denigrate others as threats to personal health and well-being. You can see it when, in a store, another customer scolds you for not wearing your mask properly, or for going the wrong direction in the aisle. Or when Governor Karen Cuomo of NY denounces Florida or Georgia for their policies, while his state sets records for Wu Flu deaths per million.

There are various ways of responding to the Karens of this world. Comedian Steve Martin was famous for his reply to PC critics.

When the scolding is related to trivial procedural details, it’s appropriate to respond with: “Whatever.”

Then there’s Jimbob’s approach which involves switching the context to expose the absurdity of Karen’s challenge.
