The LIA Warming Rebound Is Over

Figure 1. Graph showing the number of volcanoes reported to have been active each year since 1800 CE. Total number of volcanoes with reported eruptions per year (thin upper black line) and 10-year running mean of the same data (thick upper red line). Lower lines show only the annual number of volcanoes producing large eruptions (≥ 0.1 km³ of tephra or magma); the scale is enlarged on the right axis, and the thick red lower line again shows the 10-year running mean. Global Volcanism Project Discussion
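The 10-year running means plotted in Figure 1 are straightforward to compute; a minimal sketch in Python, using made-up annual eruption counts rather than the actual reported data:

```python
def running_mean(values, window=10):
    """Trailing running mean: each point averages up to `window` recent years."""
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Illustrative annual eruption counts (not the Global Volcanism data)
counts = [30, 35, 28, 40, 33, 37, 31, 29, 36, 34, 50, 45]
smoothed = running_mean(counts)
print([round(x, 1) for x in smoothed])
```

A centered window is also common for plots of this kind; the trailing form above is just the simplest variant.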

Thanks to Dr. Francis Manns for drawing my attention to the role of volcanoes as a climate factor, particularly in relation to the onset of the Little Ice Age (LIA), 1400 to 1900 AD. I was aware that the temperature record since about 1850 can be explained by a steady rebound of 0.5°C per century overlaid with a quasi-60-year cycle, most likely ocean-driven. See below the 2009 diagram from Dr. Syun Akasofu's paper Two Natural Components of Recent Warming.
When I presented this diagram to my warmist friends, they would respond, “But you don’t know what caused the LIA or what ended it!” To which I would say, “True, but we know it wasn’t due to burning fossil fuels.” Now I find there is a body of evidence suggesting what caused the LIA and why the temperature rebound may be over. Part of it is a familiar observation that the LIA coincided with a period when the sun was lacking sunspots, the Maunder Minimum.
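Akasofu's description amounts to a simple additive model: a linear recovery trend plus a quasi-60-year oscillation. A sketch in Python, using the 0.5°C-per-century trend mentioned above with an assumed cycle amplitude and phase (illustrative values, not Akasofu's fitted parameters):

```python
import math

def two_component_anomaly(year, trend_per_century=0.5, amplitude=0.2,
                          period=60.0, peak_year=1940.0, base_year=1850.0):
    """Temperature anomaly (°C) as linear LIA-recovery trend plus a
    quasi-60-year oscillation. Amplitude and peak year are assumptions."""
    trend = trend_per_century * (year - base_year) / 100.0
    cycle = amplitude * math.cos(2 * math.pi * (year - peak_year) / period)
    return trend + cycle

for y in (1850, 1910, 1940, 2000):
    print(y, round(two_component_anomaly(y), 2))
```

Subtracting the oscillation from an observed series would leave the steady rebound, which is the point of the decomposition.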

Not to be overlooked is the climatic role of volcanic activity in inducing deep cooling episodes such as the LIA. Jihong Cole-Dai explains in a paper published in 2010 entitled Volcanoes and Climate. Excerpt in italics with my bolds.

There has been strong interest in the role of volcanism during the climatic episodes of the Medieval Warm Period (MWP, 800–1200 AD) and Little Ice Age (LIA, 1400–1900 AD), when direct human influence on the climate was negligible. Several studies attempted to determine the influence of solar forcing and volcanic forcing and came to different conclusions: Crowley and colleagues suggested that increased frequency of stratospheric eruptions in the seventeenth century and again in the early nineteenth century was responsible in large part for the LIA. Shindell et al. concluded that the LIA is the result of reduced solar irradiance, as seen in the Maunder Minimum of sunspots, during the time period. Ice core records show that the number of large volcanic eruptions between 800 and 1100 AD is possibly small (Figure 1), when compared with the eruption frequency during the LIA. Several researchers have proposed that more frequent large eruptions during the thirteenth century (Figure 1) contributed to the climatic transition from MWP to LIA, perhaps as a part of the global shift from a warmer to a colder climate regime. This suggests that the volcanic impact may be particularly significant during periods of climatic transitions.

How volcanoes impact on the atmosphere and climate

Alan Robock explains Climatic Impacts of Volcanic Eruptions in Chapter 53 of the Encyclopedia of Volcanoes.  Excerpts in italics with my bolds.

The major component of volcanic eruptions is the matter that emerges as solid, lithic material or solidifies into large particles, which are referred to as ash or tephra. These particles fall out of the atmosphere very rapidly, on timescales of minutes to a few days, and thus have no climatic impacts but are of great interest to volcanologists, as seen in the rest of this encyclopedia. When an eruption column still laden with these hot particles descends down the slopes of a volcano, this pyroclastic flow can be deadly to those unlucky enough to be at the base of the volcano. The destruction of Pompeii and Herculaneum after the AD 79 Vesuvius eruption is the most famous example.

Volcanic eruptions typically also emit gases, with H2O, N2, and CO2 being the most abundant. Over the lifetime of the Earth, these gases have been the main source of the Earth’s atmosphere and ocean after the primitive atmosphere of hydrogen and helium was lost to space. The water has condensed into the oceans, the CO2 has been changed by plants into O2 or formed carbonates, which sink to the ocean bottom, and some of the C has turned into fossil fuels. Of course, we eat plants and animals, which eat the plants, we drink the water, and we breathe the oxygen, so each of us is made of volcanic emissions. The atmosphere is now mainly composed of N2 (78%) and O2 (21%), both of which had sources in volcanic emissions.

Of these abundant gases, both H2O and CO2 are important greenhouse gases, but their atmospheric concentrations are so large (even for CO2 at only 400 ppm in 2013) that individual eruptions have a negligible effect on their concentrations and do not directly impact the greenhouse effect. Global annually averaged emissions of CO2 from volcanic eruptions since 1750 have been at least 100 times smaller than those from human activities. Rather, the most important climatic effect of explosive volcanic eruptions is through their emission of sulfur species to the stratosphere, mainly in the form of SO2, but possibly sometimes as H2S. These sulfur species react with H2O to form H2SO4 on a timescale of weeks, and the resulting sulfate aerosols produce the dominant radiative effect from volcanic eruptions.

The major effect of a volcanic eruption on the climate system is the effect of the stratospheric cloud on solar radiation (Figure 53.1). Some of the radiation is scattered back to space, increasing the planetary albedo and cooling the Earth-atmosphere system. The sulfate aerosol particles (typical effective radius of 0.5 μm, about the same size as the wavelength of visible light) also forward scatter much of the solar radiation, reducing the direct solar beam but increasing the brightness of the sky. After the 1991 Pinatubo eruption, the sky around the sun appeared more white than blue because of this. After the El Chichón eruption of 1982 and the Pinatubo eruption of 1991, the direct radiation was significantly reduced, but the diffuse radiation was enhanced by almost as much. Nevertheless, the volcanic aerosol clouds reduced the total radiation received at the surface.

Crowley et al. (2008) go into the details in their paper Volcanism and the Little Ice Age. Excerpts in italics with my bolds.

Although solar variability has often been considered the primary agent for LIA cooling, the most comprehensive test of this explanation (Hegerl et al., 2003) points instead to volcanism being substantially more important, explaining as much as 40% of the decadal-scale variance during the LIA. Yet, one problem that has continually plagued climate researchers is that the paleo-volcanic record, reconstructed from Antarctic and Greenland ice cores, cannot be well calibrated against the instrumental record. This is because the primary instrumental volcano reconstruction used by the climate community is that of Sato et al. (1993), which is relatively poorly constrained by observations prior to 1960 (especially in the southern hemisphere).

Here, we report on a new study that has successfully calibrated the Antarctic sulfate record of volcanism from the 1991 eruptions of Pinatubo (Philippines) and Hudson (Chile) against satellite aerosol optical depth (AOD) data (AOD is a measure of stratospheric transparency to incoming solar radiation). A total of 22 cores yield an area-weighted sulfate accumulation rate of 10.5 kg/km², which translates into a conversion rate of 0.011 AOD per kg/km² of sulfate. We validated our time series by comparing a canonical growth and decay curve for eruptions for Krakatau (1883), the 1902 Caribbean eruptions (primarily Santa Maria), and the 1912 eruption of Novarupta/Katmai (Alaska).
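The calibration described here is a simple linear conversion from area-weighted ice-core sulfate flux to aerosol optical depth. A minimal sketch using the 0.011 AOD per kg/km² factor from the excerpt (the function name and any input other than the quoted 10.5 kg/km² are hypothetical):

```python
AOD_PER_SULFATE = 0.011  # AOD per (kg/km^2) of sulfate, from the calibration above

def sulfate_to_aod(sulfate_kg_per_km2):
    """Convert an ice-core sulfate deposition rate to stratospheric AOD."""
    return AOD_PER_SULFATE * sulfate_kg_per_km2

# The Pinatubo/Hudson area-weighted rate quoted in the text:
print(round(sulfate_to_aod(10.5), 4))
```

Applied to a dated sulfate series, this one multiplication is what turns an ice-core record into the AOD forcing curves compared in the figures.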

We therefore applied the methodology to part of the LIA record that had some of the largest temperature changes over the last millennium.

Figure 2: Comparison of 30-90°N version of ice core reconstruction with Jones et al. (1998) temperature reconstruction over the interval 1630-1850. Vertical dashed lines denote levels of coincidence between eruptions and reconstructed cooling. AOD = Aerosol Optical Depth.

The ice core chronology of volcanoes is completely independent of the (primarily) tree ring based temperature reconstruction. The volcano reconstruction is deemed accurate to within 0 ± 1 years over this interval. There is a striking agreement between 16 eruptions and cooling events over the interval 1630-1850. Of particular note is the very large cooling in 1641-1642, due to the concatenation of sulfate plumes from two eruptions (one in Japan and one in the Philippines), and a string of eruptions starting in 1667 and culminating in a large tropical eruption in 1694 (tentatively attributed to Long Island, off New Guinea). This large tropical eruption (inferred from ice core sulfate peaks in both hemispheres) occurred almost exactly at the beginning of the coldest phase of the LIA in Europe and represents a strong argument against the implicit link of Late Maunder Minimum (1640-1710) cooling to solar irradiance changes.

Figure 1: Comparison of new ice core reconstruction with various instrumental-based reconstructions of stratospheric aerosol forcing. The asterisks refer to some modification to the instrumental data; for Sato et al. (1993) and the Lunar AOD, the asterisk refers to the background AOD being removed for the last 40 years. For Stothers (1996), it refers to the fact that instrumental observations for Krakatau (1883) and the 1902 Caribbean eruptions were only for the northern hemisphere. To obtain a global AOD for these estimates we used Stothers (1996) data for the northern hemisphere and our data for the southern hemisphere. The reconstruction for Agung eruption (1963) employed Stothers (1996) results from 90°N-30°S and the Antarctic ice core data for 30-90°S.

During the 18th century lull in eruptions, temperatures recovered somewhat but then cooled early in the 19th century. The sequence begins with a newly postulated unknown tropical eruption in mid-to-late 1804, which deposited sulfate in both Greenland and Antarctica. Then there are four well-documented eruptions—an unknown tropical eruption in 1809, Tambora (1815), and a second doublet tentatively attributed in part to Babuyan (Philippines) in 1831 and Cosiguina (Nicaragua) in 1835. These closely spaced eruptions are not only large but have a temporally extended effect on climate, due to the fact that they recur within the 10-year recovery timescale of the ocean mixed layer.

The ocean has not recovered from the first eruption so the second eruption drives the temperatures to an even lower state.

Implications for Contemporary Climate Science

In this context Dr. Francis Manns went looking for a volcanic signature in recent temperature records. His paper is Volcano and Enso Punctuation of North American Temperature: Regression Toward the Mean  Excerpts in italics with my bolds.

Abstract: Contrary to popular media and urban mythology, the global warming we have experienced since the Little Ice Age is likely finished. A review of 10 temperature time series from US cities, ranging from the hottest in Death Valley, CA, to possibly the most isolated and remote at Key West, FL, shows rebound from the Little Ice Age (which ended in the Alps by 1840) by 1870. The United States reached temperatures like modern temperatures (1950–2000) by about 1870, then declined precipitously, principally caused by Krakatoa and a series of other violent eruptions. Nine of these time series started when instrumental measurement was in its infancy and the world was cooled by volcanic dust and sulphate spewed into the atmosphere and distributed by the jet streams. These ten cities represent a sample of the millions of temperature measurements used in climate models. The average annual temperatures are useful because they account for seasonal fluctuations. In addition, time series from these cities are punctuated by the El Nino Southern Oscillation (ENSO).

As should be expected, temperature at each city reacted differently to differing events. Several cities measured the effects of Krakatoa in 1883, while only Death Valley, CA, and Berkeley, CA, sensed the minor new volcano Paricutin in Michoacán, Mexico. The Key West time series shows rapid rebound from the Little Ice Age, as do Albany, NY, Harrisburg, PA, and Chicago, IL, long before the petroleum-industrial revolution got into full swing. Recording at most sites started during a volcanically induced temperature minimum, thus giving an impression of global warming for which industrial carbon dioxide is persuasively held responsible. Carbon dioxide, however, cannot be proven responsible for these temperatures. These and likely subsequent temperatures could be the result of regression to the normal equilibrium temperatures of the earth (for now). If one were to remove the volcanic punctuation and El Nino Southern Oscillation (ENSO) input, many would display very little alarming warming from 1815 to 2000. This review illustrates the weakness of linear regression as a measure of change. If there is a systemic reason for the global warming hypothesis, it is an anthropogenic error in both origin and termination. ENSO complements and confirms the validity of NOAA temperature data. Temperatures since 2000 during the current hiatus are not available because NOAA has closed the public website.
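The abstract's point about linear regression can be illustrated with a toy series: if a record begins during a volcanically depressed interval, an ordinary least-squares fit reports a warming trend even when the equilibrium level is flat. A sketch with synthetic numbers (not Manns's station data):

```python
def linear_trend(ys):
    """Ordinary least-squares slope of ys against time indices 0..n-1."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Flat equilibrium at 15 °C, but the first years are volcanically cooled.
series = [14.2, 14.4, 14.6, 14.8] + [15.0] * 16
print(linear_trend(series) > 0)  # a positive "warming" slope appears
```

The fitted slope is an artifact of the cold starting point, which is exactly the "cold start" criticism made in the conclusions below.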

Example of a time series from Manns. Numbers refer to major named volcanic eruptions listed in his paper; for instance, #3 was Krakatoa.

The cooling effect is said to have lasted for five years after Krakatoa erupted, from 1883 to 1888. Examination of these charts, however, shows that, e.g., Krakatoa did not add to the cooling effect from the earlier eruptions of Cosiguina in 1835 and Askja in 1875. The temperature charts all show rapid rebound to the equilibrium temperature of the affected region, within a year or two at most.

Manns Map

Fourteen major volcanic eruptions, however, were recorded between 1883 and 1918 (Robock, 2000, and this essay). Some erupted for days or weeks and some were cataclysmic and shorter. The sum of all these eruptions from Krakatoa onward affected temperatures early in the instrumental age. Judging from wasting glaciers in the Alps, abrupt retreat began about 1860.

Manns Conclusions:
1) Four of these time series (Albany, Harrisburg, Chicago, and Key West) show recovery to the range of today's temperatures by 1870, before the eruption of Askja in 1875. The temperature rebounded very quickly after the Little Ice Age in the northern hemisphere.

Manns ENSO Map

2) Volcanic eruptions and the unrelated huge swings from ENSO largely rule global temperature. Volcanic history and the El Nino Southern Oscillation (ENSO) trump all other increments of temperature that may be hidden in the lists.

3) The sum of the eruptions from Krakatoa (1883) to Katla (1918) and Cerro Azul (1932) was a cold start for climate models.

4) It is beyond doubt that academic and bureau climate models use data that were gathered when volcanic activity had depressed global temperature. The cluster from Krakatoa to Katla (1883–1918) was global.

5) Modern events, Mount Saint Helens and Pinatubo, moreover, were a fraction of the intensity of the late 19th and early 20th century eruptions.

6) The demise of frequent violent volcanoes has allowed the planet to regress toward a norm (for now).

Summary

These findings describe a natural process by which a series of volcanoes, along with a period of quiet solar cycles, ended the Medieval Warm Period (MWP), chilling the land and inducing deep oceanic cooling that resulted in the Little Ice Age. With much less violent volcanic activity in the 20th century, coincident with typically active solar cycles, a Modern Warm Period ensued, with temperatures rebounding to approximately the same level as before the LIA.

This suggests that humans and the biosphere benefited from a warming process that has now ended. The solar cycles are again going quiet and are forecast to continue that way. Presently, volcanic activity has been routine, showing no increase over the last 100 years. No one knows how long the current warm period, a benefit to us from the ocean recovering after the LIA, will last. But future periods are as likely to be cooler as to be warmer compared to the present.

Scientific vs. Social Authenticity

Credit: Stanislaw Pytel Getty Images

This post was triggered by an essay in Scientific American Authenticity under Fire by Scott Barry Kaufman. He raises modern issues and expresses a social and psychological sense of authenticity that left me unsatisfied.  So following that, I turn to a scientific standard much richer in meaning and closer to my understanding.

Social Authenticity

Researchers are calling into question authenticity as a scientifically viable concept

Authenticity is one of the most valued characteristics in our society. As children we are taught to just “be ourselves”, and as adults we can choose from a large number of self-help books that will tell us how important it is to get in touch with our “real self”. It’s taken as a given by everyone that authenticity is a real thing and that it is worth cultivating.

Even the science of authenticity has surged in recent years, with hundreds of journal articles, conferences, and workshops. However, the more that researchers have put authenticity under the microscope, the more muddied the waters of authenticity have become.

Many common ideas about authenticity are being overturned.
Turns out, authenticity is a real mess.

One big problem with authenticity is that there is a lack of consensus among both the general public and among psychologists about what it actually means for someone or something to be authentic. Are you being most authentic when you are being congruent with your physiological states, emotions, and beliefs, whatever they may be?

Another thorny issue is measurement. Virtually all measures of authenticity involve self-report measures. However, people often do not know what they are really like or why they actually do what they do. So tests that ask people to report how authentic they are are unlikely to be a truly accurate measure of their authenticity.

Perhaps the thorniest issue of them all though is the entire notion of the “real self”. The humanistic psychotherapist Carl Rogers noted that many people who seek psychotherapy are plagued by the question “Who am I, really?” While people spend so much time searching for their real self, the stark reality is that all of the aspects of your mind are part of you.

So what is this “true self” that people are always talking about? Once you take a closer scientific examination, it seems that what people refer to as their “true self” really is just the aspects of themselves that make them feel the best about themselves.

Even more perplexing, it turns out that most people’s feelings of authenticity have little to do with acting in accord with their actual nature. The reality appears to be quite the opposite. All people tend to feel most authentic when having the same experiences, regardless of their unique personality.

Another counterintuitive finding is that people actually tend to feel most authentic when they are acting in socially desirable ways, not when they are going against the grain of cultural dictates (which is how authenticity is typically portrayed). On the flip side, people tend to feel inauthentic when they are feeling socially isolated, or feel as though they have fallen short of the standards of others.

Therefore, what people think of as their true self may actually just be what people want to be seen as. According to social psychologist Roy Baumeister, we will report feeling highly authentic and satisfied when the way others think of us matches up with how we want to be seen, and when our actions “are conducive to establishing, maintaining, and enjoying our desired reputation.”

Conversely, Baumeister argues that when people fail to achieve their desired reputation, they will dismiss their actions as inauthentic, as not reflecting their true self (“That’s not who I am”). As Baumeister notes, “As familiar examples, such repudiation seems central to many of the public appeals by celebrities and politicians caught abusing illegal drugs, having illicit sex, embezzling or bribing, and other reputation-damaging actions.”

Kaufman Conclusion

As long as you are working towards growth in the direction of who you truly want to be, that counts as authentic in my book regardless of whether it is who you are at this very moment. The first step to healthy authenticity is shedding your positivity biases and seeing yourself for who you are, in all of your contradictory and complex splendor. Full acceptance doesn’t mean you like everything you see, but it does mean that you’ve taken the most important first step toward actually becoming the whole person you most wish to become. As Carl Rogers noted, “the curious paradox is that when I accept myself just as I am, then I can change.”

My Comment:
Kaufman describes contemporary ego-centric group-thinking, which leads to the philosophical dead end called solipsism. As an epistemological position, solipsism holds that knowledge of anything outside one’s own mind is unsure; the external world and other minds cannot be known and might not exist outside the mind.

His discussion proves the early assertion that authenticity (in the social or psychological sense) is indeed a mess. The author finds no objective basis to determine fidelity to reality, thus leaving everyone struggling over whether to be self-directed or other-directed. As we know from Facebook, most resolve that conflict by competing to see who can publish the most selfies while acquiring the most “friends.” This is the best Scientific American can do? The swamp is huge and deep indeed.

It reminds me of what Ross Pomeroy wrote at Real Science: “Psychology, as a discipline, is a house made of sand, based on analyzing inherently fickle human behavior, held together with poorly-defined concepts, and explored with often scant methodological rigor. Indeed, there’s a strong case to be made that psychology is barely a science.”

Scientific Authenticity

In contrast, let us consider some writing by Philip Kanarev, a practicing physicist concerned with the demise of scientific thinking and teaching, who calls for a return to fundamentals. His essay is Scientific Authenticity Criteria by Ph. M. Kanarev in the General Science Journal. Excerpts in italics with my bolds.

A conjunction of scientific results in the 21st century has reached a level that provides an opportunity to find and to systematize the scientific authenticity criteria of precise knowledge already gained by mankind.

Neither Euclid, nor Newton gave precise definitions of the notions of an axiom, a postulate and a hypothesis. As a result, Newton called his laws the axioms, but it was in conflict with the Euclidean ideas concerning the essence of the axioms. In order to eliminate these contradictions, it was necessary to give a definition not only to the notions of the axiom and the postulate, but also to the notion of the hypothesis. This necessity is stipulated by the fact that any scientific research begins with an assumption regarding the reason causing a phenomenon or process being studied. A formulation of this assumption is a scientific hypothesis.

Thus, the axioms and the postulates are the main criteria of authenticity of any scientific result.

An axiom is an obvious statement, which requires no experimental check and has no exceptions. Absolute authenticity of an axiom appears from this definition. It protects it by a vivid connection with reality. A scientific value of an axiom does not depend on its recognition; that is why disregarding an axiom as a scientific authenticity criterion is similar to ineffectual scientific work.

A postulate is a non-obvious statement, its reliability being proven in the way of experiment or a set of theoretic results originating from the experiments. The reliability of a postulate is determined by the level of acknowledgement by the scientific community. That’s why its value is not absolute.

An hypothesis is an unproven statement, which is not a postulate. A proof can be theoretical and experimental. Both proofs should not be at variance with the axioms and the recognized postulates. Only after that, hypothetical statements gain the status of postulates, and the statements, which sum up a set of axioms and postulates, gain the status of a trusted theory.

The first axioms were formulated by Euclid. Here are some of them:
1 – To draw a straight line from any point to any point.
2 – To produce a finite straight line continuously in a straight line.
3 – That all right angles equal one another.

Euclidean formulation concerning the parallelism of two straight lines proved to be less concise. As a result, it was questioned and analyzed in the middle of the 19th century. It was accepted that two parallel straight lines cross at infinity. Despite a complete absence of evidence of this statement, the status of an axiom was attached to it. Mankind paid a lot for such an agreement among the scientists. All theories based on this axiom proved to be faulty. The physical theories of the 20th century proved to be the principal ones among them.

In order to understand the complicated situation being formed, one has to return to Euclidean axioms and assess their completeness. It has turned out that there are no axioms, which reflect the properties of the primary elements of the universe (space, matter and time), among those of Euclid. There are no phenomena, which could compress space, stretch it or distort it, in the nature; that is why space is absolute. There are no phenomena, which change the rate of the passing of time in nature. Time does not depend on anything; that’s why we have every reason to consider time absolute. The absolute nature of space and time has been acknowledged by scientists since Euclidean times. But when his axiom concerning the parallelism of straight lines was disputed, the ideas of relativity of space and time as well as the new theories, which were based on these ideas and proved (as we noted) to be faulty, appeared.

A law of acknowledgement of new scientific achievements was introduced by Max Planck. He formulated it in the following way: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it”. Our attempt to report the reliability of this law to the authorities is in the history of science an unnecessary intention. Certainly, time appeared in space only after matter. But still we do not know of a source that produces elementary particles – building blocks of the material world. That’s why we have no reason to consider matter absolute. But it does not prevent us from paying attention to an interconnection of the primary elements of the universe: space, matter and time. They exist only together and regardless of each other. This fact is vivid, and we have every reason to consider an indivisible existence of space, matter and time as an axiomatic one, and to call the axiom, which reflects this fact, the Unity axiom. The philosophic essence of this axiom has been noted long ago, but the practitioners of the exact sciences have failed to pay attention to the fact that it is implemented in the experimental and analytical processes of cognition of the world. When material bodies move, the mathematical description of this motion should be based on the Unity axiom. It appears from this axiom, that an axis of motion of any object is the time function. Almost all physical theories of the 20th century are in conflict with the Unity axiom. It is painful to write about it in detail.

Let us go on analyzing the role of postulates as scientific authenticity criteria. First of all, let us recollect the famous postulate by Niels Bohr concerning the orbital motion of the electrons in atoms. This catchy model of the process of the interaction of the electrons in the atoms goes on being formed in the minds of pupils in school, despite the fact that its impropriety was proven more than 10 years ago.

The role of Niels Bohr’s generalized postulate is great. Practically, it is used in the whole of modern chemistry and the larger part of physics. This postulate is based on the calculation of the spectrum of the hydrogen atom. But it is impossible to calculate the spectrum of the first orbit of the helium atom (which occupies the second place in Mendeleev’s table) with Bohr’s postulate, to say nothing of the spectra of more complicated atoms and ions. It was enough to dispute the authenticity of Bohr’s postulate, but the mission of doubt has fallen to our lot for some reason. Two years were devoted to decoding the spectrum of the first electron of the helium atom. As a result, the law of formation of the spectra of atoms and ions has taken place, as well as the law of the change of binding energy of the electron with the protons of the nuclei when energy jumps take place in the atoms. It has turned out that there is no energy of orbital motion of the electrons in these laws; there are only the energies of their linear interaction with the protons of the nuclei.

Thereafter, it has become clear that only elementary particle models can play the role of the scientific result authenticity criteria in cognition of the micro-world. From the analysis of behaviour of these models, one should derive the mathematical models, which have been ascertained analytically long ago, and describe their behaviour in the experiments that have been carried out earlier.

The ascertained models of the photons of all frequencies, the electron, the proton and the neutron meet the above-mentioned requirements. They are interconnected with each other by such a large set of theoretical and experimental information, whose impropriety cannot be proven. This is the main feature of the proximity to reality of the ascertained models of the principle elementary particles. Certainly, the process of their generation has begun from a formulation of the hypothesis concerning their structures. Sequential development of the description of these structures and their behaviour during the interactions extended the range of experimental data where the parameters of the elementary particles and their interactions were registered. For example, the formation and behaviour of electrons are governed by more than 20 constants.

We have every reason to state that the models of the photons, the electron, the proton and the neutron, which have been ascertained by us, as well as the principles of formation of the nuclei, the atoms, the ions, the molecules and the clusters already occupy a foundation for the postulates, and new scientific knowledge will cement its strength.

Science has a rather complete list of criteria with which to estimate the authenticity of scientific investigative results. The axioms (obvious statements, which require no experimental check and have no exceptions) occupy the first place; the second place is occupied by the postulates. If a new theory is in conflict with at least one axiom, it will be rejected immediately by the scientific community without discussion. If experimental data appear that are in conflict with any postulate (as happened, for example, with Newton’s first law), the future scientific community, which has learned a lesson from the scientific cowardice of the academic elite of the 20th century, will submit such a postulate to a collective analysis of its authenticity.

Kanarev Conclusion

To the academicians who have made many mistakes in knowledge of the fields of physics and chemistry, we wish them to recover their sight in old age and be glad that these mistakes are already amended. It is time to understand that a prolongation of stuffing the heads of young people with faulty knowledge is similar to a crime that will be taken to heart emotionally in the near future.

The time has ended when a diploma confirming higher education was enough to get a job. Now it is not a convincing argument for an employer; to be on the safe side, he hires a young graduate as a probationer at first, wanting to see what the graduate knows and what he is able to do. The new system of higher education has almost eliminated the possibility for a student to gain practical skills in his specialty, while preserving the requirement to absorb moronic knowledge, i.e. knowledge which does not reflect reality.

My Summary

In Science, authenticity requires fidelity to axioms and postulates describing natural realities. It also means insisting that hypotheses be validated by experimental results. Climate science claims are not scientifically authentic unless or until confirmed by observations, and not simply projections from a family of divergent computer models. And despite all of the social support for climate hysteria, those fears are again more stuffing of nonsense into heads of youth and of the scientifically illiterate.

See Also Degrees of Climate Truth

Superhuman Renewables Targets

Faster than a speeding bullet! More powerful than a locomotive! Able to leap tall buildings in a single bound! It’s Superman.

New York is not the only climate cuckoo’s nest in the United States. Here are four more states promising efforts to install wind and solar power at rates that would exhaust Superman. EIA reports: Four states updated their renewable portfolio standards in the first half of 2019. Excerpts in italics with my bolds.

As of the end of 2018, 29 states and the District of Columbia had renewable portfolio standards (RPS), or policies that require electricity suppliers to source a certain portion of their electricity from designated renewable resources or eligible technologies. Four states—New Mexico, Washington, Nevada, and Maryland—and the District of Columbia have updated their RPS since the start of 2019.

States with legally binding RPS collectively accounted for 63% of electricity retail sales in the United States in 2018. In addition to the 29 states with binding RPS policies, 8 states have nonbinding renewable portfolio goals.

New Mexico increased its overall RPS target in March 2019 to 100% of electricity sales from carbon-free generation by 2045, up from the previous target of 20% renewable generation by 2020. The new policy only applies to investor-owned utilities; cooperative electric utilities have until 2050 to reach the 100% carbon-free generation goal. The target includes interim goals of 50% renewable generation by 2030 and 80% renewable generation by 2040.

In April 2019, the Nevada legislature increased its RPS to 50% of sales from renewable generation by 2030, including a goal of 100% of electricity sales from clean energy by 2050. Later that month, Washington increased its RPS target to 100% of sales from carbon-neutral generation by 2045, an increase from the previous target of 15% of sales from renewable generation by 2020. In addition, the policy mandates a phaseout of coal-fired electricity generation in Washington by 2025. Nevada and Washington became the fourth and fifth states, respectively, to pass legislation for 100% clean electricity, following Hawaii, California, and New Mexico.

In May 2019, Maryland increased its overall RPS target to 50% of electricity sales from renewable generation by 2030, replacing the earlier target of 22.5% by 2024. In addition, the legislation mandates further study of the effects and the possibility of Maryland reaching 100% generation from renewables by 2040.

Update on Warming Hiatus

Figure 1. Comparison of HadCRUT3 and the latest HadCRUT4.6. Notice how all trends pivot around the 1998 El Niño peak.

Clive Best has a new post exploring the question: Whatever happened to the Global Warming Hiatus? Excerpts in italics with my bolds and comment at end.

The last IPCC assessment in 2013 showed a clear pause in global warming lasting 16 years from 1998 to 2012 – the notorious hiatus. As a direct consequence, AR5 estimates of climate sensitivity were reduced and CMIP5 models appeared to clearly overestimate warming trends. Following the first release of HadCRUT4 in 2014, the ‘headline’ then was that 2005 and 2010 were marginally warmer than 1998. This was the first dent in removing the hiatus. Since then each new version of H4 has shown further incremental warming trends, such that by 2019 the hiatus has completely vanished. Anyone mentioning it today is likely to be ridiculed by the climate science community. So how did this reversal happen within just 5 years? I decided to find out exactly why the post-1998 temperature record changed so dramatically in such a short period of time.

In what follows I always use the same algorithm as CRU for the station data and then blend that with the Hadley SST data. I have checked that I can reproduce exactly the latest HadCRUT4.6 results based on the current 7820 stations from CRU merged with HadSST3. Back in 2012 I downloaded the original station data from CRU – CRUTEM3. I have also downloaded the latest CRUTEM4 station data.

Figure 1 above compares the latest HadCRUT4.6 results with the last version of HadCRUT3.

I had assumed that the reason for the apparent trend change was that CRUTEM4 had added many new weather stations in the Arctic (while removing some in South America), and that additionally the SST data had been updated (HadSST2 moved to HadSST3). However, as I show below, my assumption simply isn’t true.

To investigate, I recalculated a ‘modern’ version of HadCRUT3 by using only the original 4100 stations (used by CRUTEM3) from the CRUTEM4 station data. The list of these stations is defined here. I then merged these with both the older HadSST2 and HadSST3 to derive annual global temperature anomalies. Figure 2 shows the result. I get almost exactly the same values as the full 7820 stations in HadCRUT4. It certainly does not reproduce HadCRUT3!

Figure 2. The black curve is based on “modern” CRUTEM3 stations combined with HadSST3, and the yellow curve is “modern” CRUTEM3 stations with HadSST2.

This result provides two conclusions.

  1. Modern CRUTEM3 stations give a different result to the original CRUTEM3 stations.
  2. SST data is not responsible for the difference between HadCRUT4 and HadCRUT3.

To confirm point 1, I used exactly the same code to regenerate the HadCRUT3 temperature series, using the original CRUTEM3 station data as opposed to the ‘modern’ values based on CRUTEM4.
Figure 3: Comparison of HadCRUT3 with my calculation using the original CRUTEM3 station data.

I had previously downloaded the original CRUTEM3 station data in 2012. These are combined with HadSST2 data. Now we see that the agreement with the H3 annual temperatures is very good, and indeed reproduces the hiatus.

So the conclusion is very simple. The monthly temperature values in over 4000 CRUTEM3 stations have all been continuously changed, and it is these changes alone that have transformed the 16-year-long hiatus in global warming into a rising temperature trend. Furthermore, all these updates have only affected temperatures AFTER 1998! Temperatures before 1998 have hardly changed at all, which is the second requirement needed to eliminate the hiatus.

P.S. I am sure there are excellent arguments as to why pair-wise ‘homogenisation’ is wonderful but why then does it only affect data after 1998 ?

Comment:

Meanwhile, UAH is showing a return to the mean annual anomaly since 1995.

New York: Climate Cuckoo’s Nest

Scene from One Flew Over the Cuckoo’s Nest

This just in from the Cuckoo’s Nest AKA New York State Legislature: N.Y. lawmakers agree to historic climate plan by Benjamin Storrow, E&E News. Excerpts in italics with my bolds. The comments are powerful displays of the brain damage from drinking too much climate kool aid.

Gov. Andrew Cuomo (D) said yesterday he has reached an agreement with legislative leaders over a bill to slash New York’s greenhouse gas emissions, setting the stage for one of the most significant state climate victories since President Trump took office.

The announcement, coming just days before the close of the legislative session, represented a big victory for climate activists, who have spent three years pushing for major legislation to curb greenhouse gases in the Empire State.

Lawmakers were still working on final amendments yesterday, but the outlines of the deal were becoming clear. The legislation calls for reducing emissions by 40% from 1990 levels by 2030 and 85% by 2050. The remaining 15% of emissions would be offset, making the state carbon neutral. The bill would also require that all electricity generation come from carbon-free sources by 2040. A Climate Action Council would be established to ensure the state meets its targets.

“I believe we have an agreement, and I believe it is going to pass,” Cuomo said in a radio interview on WAMC.

The comment ended months of speculation over the fate of climate legislation in New York. Democratic lawmakers, who seized complete control of state government when they took over the state Senate last fall, had been pushing a bill called the “Climate and Community Protection Act.” The bill would spend 40% of the state’s clean energy revenues on energy efficiency measures and renewable installations in disadvantaged communities.

That drew repeated public objections from Cuomo, who said he wanted to ensure that environmental revenue was spent on environmental programs. Ultimately, the two sides settled on a compromise: At least 35% of revenues would go to disadvantaged communities. That funding could rise as high as 40%, which would amount to $370 million in fiscal 2018-19.

“It was a question of the distribution of the funding,” Cuomo told WAMC. “I understand the politics on these issues. Everyone wants to make all these advocacy groups happy. Taxpayers’ money is taxpayers’ money. And if it’s taxpayers’ money for an environmental purpose, I want to make sure it’s going to an environmental purpose.

“This transformation to a new green economy is very expensive. We don’t have the luxury of using funding for political purposes.”

Business interests had urged Cuomo and Democratic lawmakers to slow down, saying the legislation threatened 40,000 manufacturing jobs in the state. The Business Council of New York State called zero carbon emissions “unrealistic.”

But Democratic lawmakers forged ahead, working through the weekend to iron out a deal with Cuomo before a filing deadline for legislation Sunday. They argued that the risks of climate change, coupled with the benefits of a green energy economy, outweighed the potential costs.

“It means that on Father’s Day, when I see my grandchildren next year, I’ll have a lot less uncertainty about their future than I did yesterday morning,” said Democratic Assemblyman Steve Englebright, a champion of the climate legislation. “It means we are going to be in the vanguard among states, tackling a problem that will affect every jurisdiction here and around the globe. New York will lead the way.”

State Sen. Todd Kaminsky (D) said New York’s action would send a major signal to markets, helping companies plan for a cleaner future. But ultimately, he said, lawmakers were responding to voters.

“Our constituents told us, ‘Don’t come back without doing something on climate,'” Kaminsky said. “The future is now. I think we’ve taken that important step.”

‘Policy mandate with teeth’

Republican control of the state Senate meant climate policy in New York had been centered in the governor’s office until this year. Cuomo has pumped out executive orders banning hydraulic fracturing, calling for the closure of the state’s remaining coal plants in 2020 and targeting a 40% reduction in emissions by 2030, among other things.

The legislation enshrines many of Cuomo’s targets into law, ensuring they will outlast the current governor. The new Climate Action Council would be required to issue recommendations on how to install 6 gigawatts of distributed solar by 2025, 9 GW of offshore wind by 2035 and 3 GW of energy storage by 2030.

But the law also represents a power shift of sorts. Environmentalists have long criticized Cuomo for failing to follow through on many of his environmental objectives. His environmental agencies have yet to produce a climate action plan, despite an executive order directing them to do so. And his coal regulation arrived earlier this year, when only two coal plants remained (Climatewire, May 10).

The legislation ensures the governor will follow through, advocates said. It would require an annual greenhouse gas inventory. The Climate Action Council would release a report every four years detailing the state’s progress (or lack thereof) toward its emissions goals.

And by putting the state’s climate commitment in law, it would open the door for citizens to sue if New York does not meet its targets.

“This isn’t just a planning document. It is a policy mandate with teeth that they can’t evade or avoid,” said Eddie Bautista, executive director of the New York City Environmental Justice Alliance.

“By inserting climate justice, it becomes a plan about humanity; it becomes a plan about the dignity and value we place on humanity. There are ways to help all of us to catalyze economic development for all of us,” Bautista said. “That is why so many people are excited about this plan and hope it replicates.”

Unlike many of its peers, the Empire State is casting its gaze beyond power plants, the traditional realm of state climate policy. Transportation accounts for roughly a third of the state’s emissions, followed by residential heating and cooling (16%) and electricity generation (13%). The bill would direct the Climate Action Council to recommend emissions performance standards for the transportation, building and industrial sectors.

At the same time, the legislation leaves considerable questions regarding how New York intends to slash emissions. Much would depend on the Climate Action Council and the governor, who will ultimately be charged with implementing the legislation.

New York has considerable ground to make up. The state cut emissions only 8% between 1990 and 2015, according to the most recent New York greenhouse gas inventory.

Greens acknowledged the amount of work ahead but said they are optimistic.

“Now, for the first time, we actually have our commitment on climate enshrined into law,” said Conor Bambrick, who leads the air and energy program at Environmental Advocates of New York. “That is going to allow for real long-term planning and action so New York can get serious about tackling this problem, serious about getting innovative about ways to tackle this problem and doing it equitably.”

My Comment:

Sorry, New York, you are not leading the way. Germany and South Australia have already gone down this rabbit hole, and you can only surpass them by inflicting even greater destruction on your economy and society.

Arctic Ice In Perspective

With Arctic ice melting season underway, warmists are again stoking fears about ice disappearing in the North.  In fact, the pattern of Arctic ice seen in historical perspective is not alarming. People are over-thinking and over-analyzing Arctic Ice extents, and getting wrapped around the axle (or should I say axis).  So let’s keep it simple and we can all readily understand what is happening up North.

I will use the ever popular NOAA dataset derived from satellite passive microwave sensors.  It sometimes understates the ice extents, but everyone refers to it and it is complete from 1979 to 2018.  Here’s what NOAA reports (in M km2):

We are frequently told that only the March maximums and the September minimums matter, since the other months are only transitional between the two.  So the graph above shows the mean ice extent, averaging the two months March and September.

If I were adding this to the Ice House of Mirrors, the name would be The X-Ray Ice Mirror, because it looks into the structure of the time series. For even more clarity and simplicity, here is the table:

NOAA NH Annual Average Ice Extents (in M km2).  Sea Ice Index v3.0 (here)

Year Average Change Rate of Change
1979 11.697
1996 11.353 -0.344 -0.020 per year
2007 9.405 -1.949 -0.177 per year
2018 9.506  +0.102 +0.009 per year

The satellites involve rocket science, but this does not.  There was a small loss of ice extent over the first 17 years, then a dramatic downturn for 11 years, 9 times the rate as before. That was followed by the current plateau with no further loss of ice extent.  All the fuss is over that middle period, and we know what caused it.  A lot of multi-year ice was flushed out through the Fram Strait, leaving behind more easily melted younger ice. The effects from that natural occurrence bottomed out in 2007.
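The rate arithmetic above is easy to verify. Here is a minimal sketch in Python that recomputes the change and rate-of-change columns from the table's segment endpoints (the extents are the NOAA annual averages quoted above):

```python
# Recompute the change and rate-of-change columns of the NOAA table
# from its segment endpoints (annual average extents in M km^2).
segments = [(1979, 11.697), (1996, 11.353), (2007, 9.405), (2018, 9.506)]

for (y0, e0), (y1, e1) in zip(segments, segments[1:]):
    change = e1 - e0              # total change over the segment
    rate = change / (y1 - y0)     # change per year
    print(f"{y0}-{y1}: change {change:+.3f} M km2, rate {rate:+.3f} per year")
```

The middle segment's decline (about -0.18 M km2 per year) is roughly nine times the rate of the first segment, which is the comparison made in the text.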

Kwok et al say this about the Variability of Fram Strait ice flux:

The average winter area flux over the 18-year record (1978–1996) is 670,000 km2, ~7% of the area of the Arctic Ocean. The winter area flux ranges from a minimum of 450,000 km2 in 1984 to a maximum of 906,000 km2 in 1995. . . . The average winter volume flux over the winters of October 1990 through May 1995 is 1745 km3, ranging from a low of 1375 km3 in 1990 to a high of 2791 km3 in 1994.

https://www.researchgate.net/publication/261010602/download

Conclusion:

Some complain it is too soon to say Arctic Ice is recovering, or that 2007 is a true change point.  The same people were quick to jump on a declining period after 1996 as evidence of a “Death Spiral.”

Footnote:

No one knows what will happen to Arctic ice.

Except maybe the polar bears.

And they are not talking.

Except, of course, to the admen from Coca-Cola

We are Ignored, then Dissed, then Debated, then We Win.

Global Warming Debate Soho Forum May 8, 2019

The post title is a reference to a quote attributed to Mahatma Gandhi who said, when facing overwhelming odds against an entrenched establishment in India: “First they ignore you, then they laugh at you, then they fight you, then you win.”

Watch: Skeptical scientist wins rare New York City climate debate against warmist scientist – audience flips from warmist views to skeptical after debate (H/T John Ray at his blog Greenie Watch). Excerpts in italics with my bolds.

The Soho Forum, Published on May 6, 2019

Resolution: There is little or no rigorous evidence that rising concentrations of carbon dioxide are causing dangerous global warming and threatening life on the planet.

For the affirmative:

Craig Idso is the founder, former president, and currently chairman of the Center for the Study of Carbon Dioxide and Global Change. The Center was founded in 1998 as a non-profit public charity dedicated to discovering and disseminating scientific information pertaining to the effects of atmospheric carbon dioxide enrichment on climate and the biosphere.

Dr. Idso’s research has appeared many times in peer-reviewed journals, and he is the author or coauthor of several books, including The Many Benefits of Atmospheric CO2 Enrichment (Vales Lake Publishing, LLC, 2011) and CO2, Global Warming and Coral Reefs (Vales Lake Publishing, LLC, 2009).

Dr. Idso also serves as an adjunct scholar for the Cato Institute and as a policy advisor for the CO2 Coalition, the Heartland Institute, and the Committee For A Constructive Tomorrow.

For the negative:

Jeffrey Bennett is an astrophysicist and educator. He has focused his career on math and science literacy. He is the lead author of bestselling college textbooks in astronomy, astrobiology, mathematics, and statistics, and of critically acclaimed books for the general public on topics including Einstein’s theory of relativity, the search for extraterrestrial life, and the importance of math to our everyday lives.

Other career highlights include serving two years as a Visiting Senior Scientist at NASA Headquarters, proposing and helping to develop the Voyage Scale Model Solar System that resides on the National Mall in Washington, DC, and creating the free Totality app that has helped tens of thousands of people learn how to view a total solar eclipse.

His book A Global Warming Primer is posted freely online at http://www.globalwarmingprimer.com.

Moderator: “We have the final vote. The yes vote on the resolution (the skeptical position) began at 24% and went up to 46% after the debate. So the skeptical argument gained 22 percentage points. That’s the number to beat (46%). The no vote (the warmist position) started at 29% and went up to 41%, up 12 points.” The winner of the debate is skeptical scientist Dr. Craig Idso, whose resolution asserted that “There is little or no rigorous evidence that rising concentrations of carbon dioxide are causing dangerous global warming and threatening life on the planet.”

Flashback 2007: Scientific Smackdown: Skeptics Voted The Clear Winners Against Global Warming Believers in Heated NYC Debate – RealClimate.org’s Gavin Schmidt appeared so demoralized that he mused that debates equally split between believers of a climate ‘crisis’ and scientific skeptics are probably not “worthwhile” to ever agree to again.

The 2007 debate was on the statement: Global Warming Is Not a Crisis. Clips, transcripts, and results are here:
https://www.intelligencesquaredus.org/debates/global-warming-not-crisis

December 1, 2009, was the Munk Debate on the statement: Be it resolved, climate change is mankind’s defining crisis, and demands a commensurate response… 
PRO George Monbiot, Elizabeth May 
CON Bjørn Lomborg, Lord Nigel Lawson
RESULT CON gains 8%. CON wins
See: https://munkdebates.com/debates/climate-change

More recent is the March 2020 Karoly/Tamblyn–Happer dialogue on global warming at The Best Schools.

Drs. Karoly and Happer argued the following theses:

Dr. Karoly: Science has established that it is virtually certain that increases of atmospheric CO2 due to burning of fossil fuels will cause climate change that will have substantial adverse impacts on humanity and on natural systems. Therefore, immediate, stringent measures to suppress the burning of fossil fuels are both justified and necessary.

Dr. Happer: There is no scientific basis for the claim that increases of atmospheric CO2 due to burning of fossil fuels will cause climate change that will have substantial adverse impacts on humanity and on natural systems. If fossil fuels are burnt responsibly to limit real pollutants like fly ash, oxides of nitrogen or sulfur, heavy metals, etc., the CO2 released will be a benefit to the world. Any resulting climate change will be moderate, and there will be very major benefits to agriculture and other plant life.

Carbon Tax Dubious Economics

How could 3,508 economists be wrong? Let us count the ways.

Michael Davis writes at Regulation Magazine The signatories of the recent “Economists’ Statement on Carbon Dividends” must address some important issues. Excerpts in italics with my bolds.

Economists are disagreeable people. And it’s good that they are. Most important economic questions are complex, multi-dimensional puzzles with no obvious, simple answers. But debate and disagreement advance our understanding of the world, and so good economists debate and disagree.

If you heard that thousands of the very best economists actually did agree on something, you’d probably think that it was something glaringly obvious—maybe they issued a joint statement condemning the designated hitter rule or calling for a total ban on Super Bowl halftime shows. But those aren’t the subject of the recent “Economists’ Statement on Carbon Dividends,” signed by 3,508 economists and released by the Climate Leadership Council. The statement supports the creation of a Pigouvian tax on U.S. carbon emissions on the grounds that “global climate change is a serious problem calling for immediate national action,” and that “a carbon tax offers the most cost-effective lever to reduce carbon emissions at the scale and speed that is necessary.”

This agreement is remarkable! The environment and the economy are both complex systems. Intelligent people can agree on a few things involving them—of course, manmade global warming is real—but there is vast uncertainty about how the complex climate system interacts with the complex economic system to shape the human condition in the distant future. More importantly, core questions about climate change engage fundamental moral values about intergenerational equity. How to deal with climate change is the very epitome of a “wicked problem.”

[Note: Intelligent people also note that in the world (as opposed to models) manmade global warming has yet to be detected separately from natural global warming. I understand the author is not questioning the science or the impacts (later on), but is raising serious issues about the policy proposal.]

This is a serious proposal advanced by serious people to deal with a serious problem. But it is also a radical proposal. According to a joint study by Columbia University’s Center on Global Energy Policy and the Urban Institute–Brookings Institution Tax Policy Center, in the first year the tax would amount to about $2,000 for a family of four. No matter what is done with the tax revenues, this proposal would have far-reaching economic consequences.

And so, before we get too far along, we need a proper argument over this proposal’s merits. Here, then, are five important questions about the plan. Let’s hope these questions lead to some disagreeable, but fruitful, discussions.

QUESTION 1: What if these economists are right about the principle but wrong about the tax rate?

The principle behind the carbon tax makes perfect economic sense. The market price of any good reflects at least some of the costs of making that good. The price of a gallon of gas, for example, needs to be high enough to compensate all those who worked to get the gas into your car. But some goods—and gasoline is one of them—impose costs on others that are not reflected in the price. Economists call these costs “negative externalities.” If burning a gallon of gas causes damage to coastal property, drivers are not paying the full price of their consumption and that distorts their consumption choices. That’s unfair and inefficient.

The obvious solution is to levy a Pigouvian tax equal to the harm caused, forcing consumers to shoulder the externality cost of their consumption and, perhaps, change their consumption pattern. But we have very little idea of the magnitude of the actual harm from a ton of CO2 emissions and so we don’t really know how high this carbon tax should be. Estimates of the “Social Cost of Carbon” published by the U.S. Environmental Protection Agency indicate that a ton of CO2-equivalent released in 2020 could cause harm of as little as $5 or as much as $123. (This roughly translates to a range of between 4¢ and $1 of damage from the burning of a gallon of gas.) The $40-per-ton tax suggested by the Statement signatories is a kind of average of several disparate estimates. As such, it is almost certainly the wrong number.
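The per-gallon range quoted in parentheses can be reproduced with a short calculation. This sketch assumes the standard emissions figure of roughly 8.9 kg of CO2 per gallon of gasoline burned, which is not stated in the article itself:

```python
# Convert social-cost-of-carbon estimates ($ per ton CO2) into implied
# damages per gallon of gasoline. The emissions factor is an assumed
# standard figure (~8.9 kg CO2 per gallon), not taken from the article.
CO2_TONS_PER_GALLON = 8.9 / 1000.0  # metric tons CO2 per gallon burned

for scc in (5, 40, 123):  # low, proposed, and high estimates from the text
    damage = scc * CO2_TONS_PER_GALLON
    print(f"${scc}/ton CO2 -> ${damage:.2f} per gallon")
```

The $5/ton estimate works out to about 4 cents per gallon and $123/ton to about $1.09, matching the 4¢-to-$1 range in the text; the proposed $40/ton tax would be roughly 36¢ per gallon.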

These economists will, no doubt, point out that the current carbon tax of zero is also wrong. But that observation, alone, is not enough to justify the proposed tax because setting the rate in excess of the actual external harm would cause real economic damage.

The economic argument in favor of carbon taxes needs to be coupled with a clear understanding that cheap, abundant energy has been an essential part of recent human progress. Fossil fuels provide food, shelter, health care, education, the arts, and countless other goods. They are not some vile poison, and consuming fossil fuels is not a shameful sin. When something is taxed, less is consumed. If, as seems likely, we consume too much energy from carbon-based resources, a tax can help to moderate that consumption appropriately. But if the tax is too high, we will consume too little. If we consume too little, we will miss out on some of the benefits that come from fossil fuels.

Here’s another problem related to the practical question of the appropriate carbon tax rate: Many fossil fuels are already heavily taxed. For example, the average tax on motor fuels is now about 48¢ per gallon, the equivalent to a tax of $54 per ton of carbon. These taxes exist mostly to raise revenue for transportation infrastructure, not control some other externality. Should the proposed new carbon tax be in addition to those existing taxes?

QUESTION 2: Should the United States impose carbon taxes even if the rest of the world does not?

In 2019 the world will produce a bit more than 35 gigatons of CO2-equivalent emissions. The United States will contribute about 5 gigatons to that total. The U.S. Department of Energy forecasts that, in 2040, world emissions will increase to 43 gigatons while U.S. emissions will drop by a small amount. A 2018 report by the Center on Global Energy Policy at Columbia University forecasts that if we impose a tax of $50 per ton of carbon in 2020 and increase that tax by 2% per year, annual U.S. emissions will fall by 13%–29% by 2030. But by 2030, U.S. emissions will be less than 15% of the world total. Even under the best-case scenario, our carbon tax would reduce global emissions by less than 5% and climate change will continue.
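The share arithmetic in this paragraph can be sketched with the figures quoted above (all values come from the text; the 29% cut is the Columbia study's best case):

```python
# U.S. share of global CO2-equivalent emissions and the best-case
# effect of a U.S.-only carbon tax, using only figures from the text.
US_GT, WORLD_GT = 5.0, 35.0          # 2019 emissions, gigatons
us_share_now = US_GT / WORLD_GT      # about 14% of the global total

# By 2030 the U.S. share is under 15% of the world total; the best-case
# tax cuts U.S. emissions by 29%, so the global reduction is at most:
best_case_global_cut = 0.29 * 0.15

print(f"U.S. share today: {us_share_now:.1%}")
print(f"Best-case global cut: {best_case_global_cut:.1%}")  # under 5%
```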

If the rest of the world doesn’t join us, the U.S. carbon tax won’t matter. This leads to a related problem. If the United States levies a carbon tax, it becomes more expensive for U.S. firms to make and transport goods. That means a U.S. carbon tax will reward those countries that don’t do anything to reduce their emissions by giving those places a competitive advantage. Exploiting that advantage will likely be too much of a temptation for others—especially developing countries with desperately poor people—to ignore. It is even possible that by pushing energy-intensive production to places with no controls on carbon emissions, this policy will make global emissions worse.

QUESTION 3: Doesn’t the “border-adjustment tax” that has to be part of the plan present enormous practical and political problems?

This carbon tax should not just apply to U.S. emissions, but to foreign emissions resulting from goods imported into the United States. Assessing a border-adjustment tax on these goods would be difficult from both an economic and political perspective. For example, almost 5% of the world’s carbon emissions result from the production of cement. But different production technologies for cement and different modes of transportation result in vastly different emissions. Even though two different shipments of cement may be practically identical, they won’t have similar carbon footprints. How would U.S. authorities determine which bags of cement face what tax rates?

The political problems are also tough. First of all, to the rest of the world a border-adjustment tax would seem like a tariff. How would we impose this tax without violating treaty obligations and without inviting retaliation? Second, how would we keep the crony capitalists away from the treats? The temptation to game the system for competitive advantage would be enormous.

QUESTION 4: What about adaptation?

All but the most apocalyptic of the potential harms from global warming can be managed through some type of adaptation to the changing climate. Building practices, for example, can be changed to deal with the threat of rising sea levels. There is also the possibility of some sort of geo-engineering solution. Remember that atmospheric CO2 is an otherwise harmless substance and that the burning of fossil fuels is enormously valuable. This means that if it is less costly to adapt to the effects of a ton of CO2 emissions than it is to eliminate the carbon, we should adapt. But to the extent that the carbon tax actually works to reduce CO2 emissions, it creates disincentives to adapt.

The proponents of a tax might say that the estimates of the Social Cost of Carbon already factor in adaptation costs. The problem with that argument, though, is that the most effective adaptation solutions probably haven’t been created. New technologies to deal with climate change—altering agriculture practices, geo-engineering solutions, and other initiatives we can’t currently imagine—may well prove extraordinarily effective and efficient. An effective carbon tax reduces the incentive to find those solutions that allow us to enjoy the benefits of fossil fuel use without much cost.

QUESTION 5: Isn’t economic growth much more important than lowering CO2?

Every four years, a distinguished group of analysts delivers to Congress the “National Climate Assessment.” The latest version came out last November and was full of sobering projections. Anyone who chooses to ignore the threat of global warming should read what it has to say. Among the direst warnings was a graphic showing that if the worst-case scenario played out, by 2100 the effects of global warming would reduce U.S. gross domestic product by about 15% from current projections. To put that number in perspective, during the 2008 recession GDP fell by about 1%. That was accompanied by huge increases in unemployment and economic dislocation. Between 1929 and 1933, the worst years of the Great Depression, GDP fell by about 34%. That led to tremendous misery and, arguably, a world war. Remember, too, that the potential decline of 15% of GDP in 2100 isn’t just a short-term event. As bad as the Great Depression was, the economy recovered. The scary scenario is that by failing to address global warming, we will cause future generations to suffer a huge permanent decline in GDP.

But there’s another thing to keep in mind: if the United States could boost annual GDP growth rates between now and 2100 by an additional 0.2 percentage points, by the year 2100 U.S. GDP would be more than 17% larger than is currently projected. Think about it this way: Suppose that you had to pick between two tax policies. The first would reduce U.S. carbon emissions and maybe prevent the potential loss of 15% of GDP by 2100. The second would increase annual growth rates by 0.2 percentage points, increasing GDP by 17% by 2100.
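The compound-growth arithmetic behind that 17% figure is easy to check. Here is a minimal sketch, assuming roughly 81 years of compounding between 2019 and 2100 and a constant extra 0.2 percentage points of annual growth (the specific year count is an assumption, not from the original):

```python
# Verify the compound-growth claim: an extra 0.2 percentage points of
# annual GDP growth, compounded from 2019 to 2100.
years = 2100 - 2019          # ~81 years of compounding (assumed start year)
extra_growth = 0.002         # 0.2 percentage points per year
gain = (1 + extra_growth) ** years - 1
print(f"Extra GDP by 2100: {gain:.1%}")  # roughly 17.6%, i.e. "more than 17%"
```

The result of about 17.6% confirms the "more than 17% larger" figure quoted above; the simple mathematics of compound growth does the rest.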

As you’re picking between the choices, keep in mind that even if the carbon policy controls U.S. emissions, it is uncertain whether the rest of the world will go along and climate change will stop. Remember, too, that there is huge uncertainty about the specifics of global warming. The carbon emissions policies target the worst-case scenario. The GDP growth policy, on the other hand, doesn’t depend on the rest of the world and the benefits are guaranteed by the simple mathematics of compound growth.

Don’t try to wave off this choice by saying you want to do it all. We all want many things, but what we can have is bounded by our scarce resources. This particular group of distinguished economists has—quite deservedly—an impressive stock of political capital and prestige. But political capital and prestige are themselves scarce resources. Why target carbon taxes rather than growth-enhancing tax reform?

Climate lemmings sticking together.

CONCLUSION

Let’s end where we started, with a call for a conversation. These five questions aren’t intended as some snarky put-down of a silly economic proposal. No good economist—and certainly none of the 3,508 who signed the Statement—should feel disrespectfully challenged by these questions. There are intelligent responses to each of these questions. And at the end of the discussion, we should all have a better idea about whether the answers are good enough to go ahead with the tax.

Trump Just Cured Health Care (No One Noticed)

The Editorial Board of Issues and Insights tells the story ignored by others in their article Trump Just Revolutionized Health Care — And Nobody Noticed. Excerpts in italics with my bolds.

Few have ever heard of “Health Reimbursement Accounts,” but they could fundamentally change the nation’s health care system — for the better — and destroy the Democrats’ case for socialized health care.

Late last week, the Trump administration finalized rules that will let companies put money into tax-exempt HRAs that their employees could then use to buy an individual insurance plan on their own. Seems like no big deal, right? Except it will start to unravel a 77-year-old policy mistake that is largely responsible for many of the problems the health care system suffers today.

Back in 1942, the Roosevelt administration imposed wage and price controls on the economy. But it exempted employer-provided benefits like health insurance, and the IRS later decreed that these benefits wouldn’t be taxed as income.

The result was to massively tilt the health insurance playing field toward employer-provided insurance. Today 88% of those with private insurance get it at work.

The massive tax subsidy — now valued at more than $300 billion — also encouraged overly generous health plans, because any health care paid by insurers was tax exempt, while out-of-pocket spending had to come from after-tax dollars.

So not only did this Roosevelt-era mistake create an employer-dominated health insurance market, it made consumers largely indifferent to the cost of care, since the vast bulk of it was picked up by a third party.

But while health care experts across the political spectrum recognize this mistake, Democrats’ response has been to get the government even more involved in health care, with the latest proposal a total government takeover under the guise of “Medicare for All.”

Republicans, to their credit, have been pushing in the opposite direction. The introduction of Health Savings Accounts — a GOP reform idea Democrats fiercely opposed — 14 years ago helped to remedy one of the tax distortions, by allowing some people to pay out of pocket costs with pre-tax money.

Even with all the restrictions Congress put on HSAs, the market for high-deductible HSA plans exploded — climbing from nothing in 2005 to nearly 30% of the employer market today. By the end of last year, consumers had saved up $10 billion in these accounts.

The rise in these “consumer directed” plans was at least partially responsible for the slow-down in health spending in recent years, according to official government reports, as consumers increasingly started shopping around.

Trump’s HRA rules will have a far more profound impact.

Under the plan, employers will be able to fund tax-free Health Reimbursement Accounts for their workers, who can then use the money to buy an individual insurance plan — thereby taking another step toward fixing the 77-year-old tax distortion. The rule also lets employers fund a different account to buy cheaper “short-term” plans.

“This subtle, technical tweak has the potential to revolutionize the private health insurance market,” wrote Avik Roy, one of the smartest health care experts around, in the Washington Post.

The administration figures that 800,000 employers will eventually move to HRA plans, and 11 million workers will get their benefits this way.

At the same time, Trump also loosened the federal rules that had needlessly impeded “association health plans.” These are plans that let members of various groups band together to buy insurance. The result will be more competition, and more affordable choices for millions of people.

The Democrats’ response? Attack these changes as another attempt by Trump to “sabotage” Obamacare. What they really fear, however, is that the two new rules will destroy their case for socialized medicine.

As Roy put it: “Together, over time, these changes would give workers more transparency into — and more control over — the health-care dollars that are now spent by other people on their behalf. That transparency and control, in turn, would create a powerful market incentive for health-care payers and providers to lower prices and increase quality.”

Once that happens, the last thing these millions of newly empowered health care shoppers will want is to be shuffled into a one-size-fits-all government plan designed for the masses by socialists like Bernie Sanders.

H20 the Gorilla Climate Molecule

In climate discussions, someone is bound to say: Climate is a lot more than temperatures. And of course, they are right. So let’s consider the other major determinant of climate, precipitation.

The chart above is actually a screen capture of real-time measurements of precipitable water in the atmosphere.  The 24-hour animation can be accessed at MIMIC-TPW ver.2 .  H/T Ireneusz Palmowski, who commented:  “I do not understand why scientists deal with anthropogenic CO2, although the entire convection in the troposphere is driven by water vapor (and ozone in high latitudes).”

These images show that H2O is driving the heat engine through its phase changes (liquid to vapor to liquid, and sometimes ice crystals as well).  And as far as radiative heat transfer is concerned, 95% of it is done by water molecules.  Below is an essay going into the dynamics of precipitation, its variability over the earth’s surface, and its role in determining regional, and even microclimates.  The post was originally titled “Here Comes the Rain Again,” inspired by the Eurythmics classic song.

The global story on rain is straightforward:

“Precipitation is a major component of the water cycle, and is responsible for depositing the fresh water on the planet. Approximately 505,000 cubic kilometres (121,000 cu mi) of water falls as precipitation each year; 398,000 cubic kilometres (95,000 cu mi) of it over the oceans. Given the Earth’s surface area, that means the globally averaged annual precipitation is 990 millimetres (39 in). Climate classification systems such as the Köppen climate classification system use average annual rainfall to help differentiate between differing climate regimes.”

http://en.wikipedia.org/wiki/Precipitation_(meteorology)
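The quoted global average is straightforward to verify. A minimal sketch, assuming a total Earth surface area of about 510 million km² (a figure not stated in the excerpt):

```python
# Check the globally averaged precipitation figure: 505,000 km^3 of water
# spread over the Earth's entire surface each year.
annual_precip_km3 = 505_000
earth_area_km2 = 510e6       # assumed total surface area of the Earth
depth_km = annual_precip_km3 / earth_area_km2
depth_mm = depth_km * 1e6    # 1 km = 1,000,000 mm
print(f"Globally averaged annual precipitation: {depth_mm:.0f} mm")  # ~990 mm
```

The result, about 990 mm per year, matches the roughly one meter average cited in the excerpt.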

Globally, average precipitation can vary by +/-5% from year to year, but there is no particular trend in the history of observations. Yet rain is one of those things where averages don’t tell you much. For starters, look at where it’s coming down:

So about 1 meter a year is the nominal average of all rain over all surfaces. Some places get up to 10 meters of rain (about 400 inches) and others get almost none. 47% of the earth is considered dryland, defined as any place where the rate of evaporation/transpiration exceeds the rate of precipitation. A desert is defined as a dryland with less than 25 cm of precipitation. In the image above, the polar deserts are remarkably well defined. They have little hope of precipitation because there is little heat to move the water. More heat in, more water movement. Less heat in, less water movement.

Then there are the seasonal patterns. The band of maximum rains moves with the sun: more north in June, more south in December. More sun, more heating, more rain. Movement in sync with the sun, little time delay. The equatorial zone with maximum solar heat has maximum rains; the polar zones with minimal heating have minimal precipitation. It’s a very tightly coupled system with low time lags.

The other obvious thing is how central land areas get dry desert conditions if they are not in the equatorial band nor near a warm water current. Brazil, in particular, benefits from warm coastal waters and near equatorial rains. The Gulf Stream rescues Europe from a much drier climate, but I fear the Gulf Stream shifting of zones also puts parts of Saharan Africa out of the equatorial wet. (In some times during history it DOES get a load of water, though…)
From E.M. Smith
https://chiefio.wordpress.com/2011/11/01/what-does-precipitation-say-about-heat-flow/

How Do Oceans Make Rain

Here I am taking direction from A.M. Makarieva and her colleagues. She explains:

“Water vapor originating by evaporation sustained by solar radiation represents a source of ordered potential energy that is available for generation of atmospheric circulation, including the biotic pump. We will further consider details of this process.

As we can see, early in its life the cloud expands in all directions, meanwhile the air continues to converge towards the (growing) condensation area. This process is at the core of condensation-induced dynamics: as condensation occurs and local pressure drops, this initiates convergence and ascent. They, in their turn, feedback positively on condensation intensity, such that the air pressure lowers further, convergence becomes more extensive and so on — as long as there is enough water vapor around to feed the process.

And where does the water vapor come from? Ocean evaporation 87%, plant transpiration 10%, and other evaporation (lakes, rivers, etc.) 3%.

Air circulation without condensation (A) and with condensation (B). Gray squares are the air volumes, which in case (B) contain water vapor shown by small blue squares inside gray ones. White squares indicate those air volumes that have lost their water vapor owing to condensation. Blue arrows at the Earth’s surface represent evaporation that replenishes the store of water vapor in the circulating air.

In Fig. B we can see a circulation accompanied by water vapor condensation (water vapor is shown by blue squares). At a certain height the water vapor condenses, leaving the gaseous phase, while the remaining air continues to circulate deprived of water vapor (this depletion is shown by empty white squares): it first rises and then descends. As one can see, in such a circulation the total mass of the rising air would be larger than the total mass of the descending air (cf. an escalator transporting people up). The motor driving such a circulation would not only have to compensate for the friction losses, but also have to work against the gravity acting on the ascending air.

One can see from Fig. B that the difference between the cumulative masses of the ascending and descending air parcels grows with increasing height where condensation occurs. This difference also grows with increasing amount of water vapor in the air (i.e. with increasing size of the blue squares). The dynamic power of condensation, on the other hand, is also proportional to the amount of water vapor, but it is practically independent of condensation height.

Condensation height (a proxy for precipitation pathlength) grows with increasing temperature of the Earth’s surface. It is shown in the paper that power losses associated with precipitation of condensate particles become equal to the total dynamic power of condensation at surface temperatures around 50 degrees Celsius. Since the observed power of condensation-driven winds is equal to the total dynamic power of condensation (the “motor”) minus the power spent on compensating precipitation, at such temperatures the observed circulation power becomes zero and the circulation must stop. For commonly observed values of surface temperature these losses do not exceed 40% of condensation power and cannot arrest the condensation-induced circulation. Over 60% of condensation power is spent on friction at the Earth’s surface.

Why Some Places Get More Rain Than Others

This figure shows the “tug-of-war” between the forest and the ocean for the right to become a predominant condensation zone. In Fig. a: on average the Amazon and Congo forests win this war: annual precipitation over forests is two to three times larger than the precipitation over the Atlantic Ocean at the same latitude. Note the logarithmic scale on the vertical axis: “1” means that the land/ocean precipitation ratio is equal to e = 2.718, “2” means it is equal to e2 ≈ 7.4; “0” means that this ratio is unity (equal precipitation on land and the ocean); “-1” means this ratio is 1/e ≈ 0.4; and so on.
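The natural-log scale described above is easy to mistranslate at a glance, so here is a small sketch (an illustration of the scale conversion, not code from the paper) converting axis values back to land/ocean precipitation ratios:

```python
import math

# Convert values on the figure's natural-log axis back to land/ocean
# precipitation ratios: an axis value v corresponds to a ratio of e**v.
for log_value in (2, 1, 0, -1):
    ratio = math.exp(log_value)
    print(f"axis value {log_value:+d} -> land/ocean ratio {ratio:.2f}")
```

This reproduces the figures in the caption: an axis value of 2 is a ratio of about 7.4, 0 is equal precipitation on land and ocean, and -1 is roughly 0.4.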

In Fig. b: the Eurasian biotic pump. In winter the forest sleeps, so the ocean wins, and all moisture remains over the ocean and precipitates there. In summer, when trees are active, moisture is taken from the ocean and distributed regularly over seven thousand kilometers. The forest wins! (Compare the red and black lines.) As a result, precipitation over the ocean in summer is lower than in winter, even though the temperature in summer is higher.

Finally, in panel (c): an unforested Australia. One can often hear that Australia is so dry because it is situated in the descending branch of the Hadley cell. But this figure shows that such an interpretation does not hold. In both wet and dry seasons, precipitation over Australia is four to six times lower than over the ocean. There is no biotic pump there. Because the continent is unforested, oceanic moisture cannot penetrate to the interior, irrespective of how much moisture there is over the ocean; during the wet season it precipitates in the coastal zones, causing floods. Gradually restoring natural forests in Australia from coast to interior would recover the hydrological cycle on the continent.

http://www.bioticregulation.ru/pump/pump9.php


The Biotic Pump, A.M. Makarieva et al.

The water cycle on land owes itself to the atmospheric moisture transport from the ocean. Properties of the aerial rivers that ensure the “run-in” of water vapor inland, compensating for the gravitational “run-off” of liquid water from land to the ocean, are of direct relevance for regional water availability. The biotic pump concept clarifies why the moist aerial rivers flow readily from ocean to land when the latter is home to a large forest — and why they are reluctant to do so when the forest is absent.

While it is increasingly common to blame global change for any regional water cycle disruption, the biotic pump evidence suggests that the burden of responsibility rather rests with the regional land use practices. On large areas on both sides of the Atlantic Ocean, temperate and boreal forests are intensely harvested for timber and biofuel. These forests are artificially maintained in the early successional stages and are never allowed to recover to the natural climax state. The water regulation potential of such forests is low, while their susceptibility to fires and pests is high.

https://2s3c.wordpress.com/2012/04/22/taac/

Conclusion

So the oceans make rain, and together with the forests they supply the land with its necessary fresh water. There is a threat from human activity, but it has nothing to do with CO2. Land use practices leading to deforestation have the potential to disrupt this process: without trees attracting moist air from the ocean, there is desert.

Ironically, modern societies burn fossil fuels instead of burning the forests as in the past.

For more on climate science related to H2O, see Bill Gray: H20 is Climate Control Knob, not CO2