Beliefs and Uncertainty: A Bayesian Primer

Those who follow discussions regarding Global Warming and Climate Change have heard from time to time about Bayes’ Theorem. And Bayes is quite topical in many aspects of modern society:

Bayesian statistics “are rippling through everything from physics to cancer research, ecology to psychology,” The New York Times reports. Physicists have proposed Bayesian interpretations of quantum mechanics and Bayesian defenses of string and multiverse theories. Philosophers assert that science as a whole can be viewed as a Bayesian process, and that Bayes can distinguish science from pseudoscience more precisely than falsification, the method popularized by Karl Popper.

Named after its inventor, the 18th-century Presbyterian minister Thomas Bayes, Bayes’ theorem is a method for calculating the validity of beliefs (hypotheses, claims, propositions) based on the best available evidence (observations, data, information). Here’s the most dumbed-down description: Initial belief plus new evidence = new and improved belief.   (A fuller and more technical description is below for the more mathematically inclined.)

Now that doesn’t sound so special, but in fact as you will see below, our intuition about probabilities is often misleading. Consider the classic Monty Hall Problem.

The Monty Hall Game is a counter-intuitive statistics puzzle:

There are 3 doors, behind which are two goats and a car.
You pick a door (call it door A). You’re hoping for the car of course.
Monty Hall, the game show host, examines the other doors (B & C) and always opens one of them to reveal a goat. (Both doors might have goats; he’ll randomly pick one to open.)
Here’s the game: Do you stick with door A (original guess) or switch to the other unopened door? Does it matter?

Surprisingly, the odds aren’t 50-50. If you switch doors you’ll win 2/3 of the time!

Don’t believe it? There’s a Monty Hall game (here) where you can see for yourself that your success rate doubles when you change your choice after Monty eliminates one of the doors. Run the game 100 times, either keeping your choice or changing it, and see the result.
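For readers who prefer code to a game show, the switching advantage can also be checked with a short Monte Carlo simulation. This is a minimal sketch; the door labels and trial count are arbitrary:

```python
import random

def play(switch, trials=100_000):
    """Play the Monty Hall game many times; return the fraction of wins."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's initial choice
        # Monty opens a goat door that is neither the pick nor the car
        opened = random.choice([d for d in range(3) if d != pick and d != car])
        if switch:
            # move to the one remaining unopened door
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stick:  {play(switch=False):.3f}")   # ~ 0.333
print(f"switch: {play(switch=True):.3f}")    # ~ 0.667
```

Sticking wins only when the first pick was right, which happens 1/3 of the time; switching wins in the other 2/3, exactly as the Bayesian update predicts.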

The game is really about re-evaluating your decisions as new information emerges. There’s another example regarding race horses here.

The Principle Underlying Bayes’ Theorem

Like any tool, Bayes’ method of inference is a two-edged sword, explored in an article by John Horgan in Scientific American (here):
“Bayes’s Theorem: What’s the Big Deal?
Bayes’s theorem, touted as a powerful method for generating knowledge, can also be used to promote superstition and pseudoscience”

Here is my more general statement of that principle: The plausibility of your belief depends on the degree to which your belief–and only your belief–explains the evidence for it. The more alternative explanations there are for the evidence, the less plausible your belief is. That, to me, is the essence of Bayes’ theorem.

“Alternative explanations” can encompass many things. Your evidence might be erroneous, skewed by a malfunctioning instrument, faulty analysis, confirmation bias, even fraud. Your evidence might be sound but explicable by many beliefs, or hypotheses, other than yours.

In other words, there’s nothing magical about Bayes’ theorem. It boils down to the truism that your belief is only as valid as its evidence. If you have good evidence, Bayes’ theorem can yield good results. If your evidence is flimsy, Bayes’ theorem won’t be of much use. Garbage in, garbage out.

Embedded in Bayes’ theorem is a moral message: If you aren’t scrupulous in seeking alternative explanations for your evidence, the evidence will just confirm what you already believe. Scientists often fail to heed this dictum, which helps explain why so many scientific claims turn out to be erroneous. Bayesians claim that their methods can help scientists overcome confirmation bias and produce more reliable results, but I have my doubts.

Horgan’s statement comes very close to the legal test articulated by Bradford Hill and widely used by courts to determine causation and liability in relation to products, medical treatments or working conditions.

By way of context Bradford Hill says this:

None of my nine viewpoints can bring indisputable evidence for or against the cause-and-effect hypothesis and none can be required as a sine qua non. What they can do, with greater or less strength, is to help us to make up our minds on the fundamental question – is there any other way of explaining the set of facts before us, is there any other answer equally, or more, likely than cause and effect?

Such is the legal terminology for the “null” hypothesis: As long as there is another equally or more likely explanation for the set of facts, the claimed causation is unproven.  For more see the post: Claim: Fossil Fuels Cause Global Warming

Limitations of Bayesian Statistics

From the above it should be clear that Bayesian inferences can be drawn when there are definite outcomes of interest and historical evidence of conditions that are predictive of one outcome or another. For example, my home weather sensor from Oregon Scientific predicts rain whenever air pressure drops significantly, because that forecast will be accurate 75% of the time, based on that one condition. The Weather Network adds several other variables to increase the probability, though not necessarily in predicting the outcomes in my backyard.

When it comes to the response of GMT (Global Mean Temperatures) to increasing CO2 concentrations, or many other climate concerns, we currently lack the historical probabilities because we have yet to untangle the long-term secular trends from the noise of ongoing, normal and natural variability.

Andrew Gelman writes on Bayes statistical methods and says this:

In short, I think Bayesian methods are a great way to do inference within a model, but not in general a good way to assess the probability that a model or hypothesis is true (indeed, I think ‘the probability that a model or a hypothesis is true’ is generally a meaningless statement except as noted in certain narrow albeit important examples).

A Fuller (more technical) Description of Bayes Theorem

The probability that a belief is true given new evidence
equals
the probability that the belief is true regardless of that evidence
times
the probability that the evidence is true given that the belief is true
divided by
the probability that the evidence is true regardless of whether the belief is true.
Got that?

The basic mathematical formula takes this form: P(B|E) = P(B) × P(E|B) / P(E), with P standing for probability, B for belief and E for evidence. P(B) is the probability that B is true, and P(E) is the probability that E is true. P(B|E) means the probability of B if E is true, and P(E|B) is the probability of E if B is true.
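The formula translates directly into a few lines of code. The numbers below are made up purely to illustrate the update; nothing in them comes from a real dataset:

```python
def bayes(p_b, p_e_given_b, p_e):
    """P(B|E) = P(B) * P(E|B) / P(E)"""
    return p_b * p_e_given_b / p_e

# Start undecided about the belief: P(B) = 0.5.
# The evidence is four times as likely if the belief is true (0.8 vs 0.2).
p_b, p_e_given_b, p_e_given_not_b = 0.5, 0.8, 0.2
p_e = p_e_given_b * p_b + p_e_given_not_b * (1 - p_b)   # total probability = 0.5
posterior = bayes(p_b, p_e_given_b, p_e)
print(posterior)   # 0.8 -- the evidence lifts the belief from 50% to 80%
```

Note that P(E) in the denominator is itself computed by weighing how likely the evidence is under the belief and under its alternatives; that is where "alternative explanations" enter the mathematics.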


The formula above brings out some important facts to remember about Beliefs and Uncertainties:

Tests are not the event. We have a cancer test, separate from the event of actually having cancer. We have a test for spam, separate from the event of actually having a spam message.

Tests are flawed. Tests detect things that don’t exist (false positive), and miss things that do exist (false negative).

Tests give us test probabilities, not the real probabilities. People often consider the test results directly, without considering the errors in the tests.

False positives skew results. Suppose you are searching for something really rare (1 in a million). Even with a good test, it’s likely that a positive result is really a false positive on somebody among the other 999,999.
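That base-rate effect is easy to quantify with Bayes’ theorem. The test accuracy figures below are hypothetical, chosen only to show how a “good” test fares against a one-in-a-million condition:

```python
prevalence = 1e-6           # 1 in a million actually has the condition
sensitivity = 0.99          # P(positive | condition)    -- hypothetical
false_positive_rate = 0.01  # P(positive | no condition) -- hypothetical

# Total probability of a positive result, then Bayes' theorem
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_condition_given_positive = sensitivity * prevalence / p_positive
print(f"P(condition | positive) = {p_condition_given_positive:.6f}")  # about 0.0001
```

Even with 99% sensitivity, roughly 9,999 of every 10,000 positives are false alarms, because almost everyone being tested belongs to the healthy 999,999.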

Cool: Quebec teen Studies Stars, Discovers Ancient Mayan City

 

Mouth of Fire

William Gadoury is a 15-year-old student from Saint-Jean-de-Matha in Lanaudière, Quebec. The precocious teen has been fascinated by all things Mayan for several years, devouring any information he could find on the topic.

During his research, Gadoury examined 22 Mayan constellations and discovered that if he projected those constellations onto a map, the shapes corresponded perfectly with the locations of 117 Mayan cities. Incredibly, the 15-year-old was the first person to establish this important correlation.

Then Gadoury took it one step further. He examined a twenty-third constellation which contained three stars, yet only two corresponded to known cities.

Gadoury’s hypothesis? There had to be a city in the place where that third star fell on the map.

Satellite images later confirmed that, indeed, geometric shapes visible from above imply that an ancient city with a large pyramid and thirty buildings stands exactly where Gadoury said it would be. If the find is confirmed, it would be the fourth largest Mayan city in existence.

Once Gadoury had established where he thought the city should be, the young man reached out to the Canadian Space Agency, where staff were able to obtain satellite images through NASA and JAXA, the Japanese space agency.

“What makes William’s project fascinating is the depth of his research,” said Canadian Space Agency liaison officer Daniel de Lisle. “Linking the positions of stars to the location of a lost city along with the use of satellite images on a tiny territory to identify the remains buried under dense vegetation is quite exceptional.”

Gadoury has decided to name the city K’ÀAK ‘CHI, a Mayan phrase which in English means “Mouth of Fire.”


Summary

Now that is the way you do science: Find a correlation, form a theory explaining it, make a prediction, and verify it in the real world.  The preliminary confirmation is by remote sensing with satellite images showing the geometrical shapes.

“I did not understand why the Maya built their cities away from rivers, on marginal lands, and in the mountains. They had to have another reason, and as they worshiped the stars, the idea came to me to verify my hypothesis,” Gadoury told Le Journal de Montreal.

“I was really surprised and excited when I realized that the most brilliant stars of the constellations matched the largest Maya cities,” he added.

The next step for Gadoury will be seeing the city in person. He’s already presented his findings to two Mexican archaeologists, and has been promised that he’ll join expeditions to the area.

What a delightful young scientist and a wonderful achievement.

Sources: Earth Mystery News, Epoch Times.

Head, Heart and Science

A man who has not been a socialist before 25 has no heart. If he remains one after 25 he has no head.—King Oscar II of Sweden

Recently I had an interchange with a friend from high school days, and he got quite upset with this video by Richard Lindzen. So much so that he looked up attack pieces in order to dismiss Lindzen as a source. This experience impressed some things upon me.

Climate Change is Now Mostly a Political Football (at least in USA)

My friend attributed his ill humor to the current political environment. He readily bought into slanderous claims, and references to Lindzen being bought and paid for by the Koch brothers. At this point, Bernie and Hillary only disagree about who is the truest believer in Global Warming. Once we get into the general election process, “Fighting Climate Change” will intensify as a wedge issue, wielded by smug righteous believers on the left against the anti-science neanderthals on the right.

So it is a hot label for social-media driven types to identify who is in the tribe (who can be trusted) and the others who cannot. For many, it is not any deeper than that.

The Warming Consensus is a Timesaver

My friend acknowledged that his mind was made up on the issue because 95+% of scientists agreed. It was extremely important for him to discredit Lindzen as untrustworthy to maintain the unanimity. When a Warmist uses: “The Scientists say: ______” , it is much the same as a Christian reference: “The Bible says: _______.” In both cases, you can fill in the blank with whatever you like, and attribute your idea to the Authority. And most importantly, you can keep the issue safely parked in a No Thinking Zone. There are plenty of confusing things going on around us, and no one wants one more ambiguity requiring time and energy.

Science Could Lose the Delicate Balance Between Head and Heart

Decades ago Arthur Eddington wrote about the tension between the attitudes of artists and scientists in their regard for nature. On the one hand are people filled with the human impulse to respect, adore and celebrate the beauty of life and the world. On the other are people driven by the equally human need to analyze, understand and know what to expect from the world. These are Yin and Yang, not mutually exclusive, and all of us have some of each.

Most of us can recall the visceral response in the high school biology lab when assigned to dissect a frog. Later on, crayfish were preferred (less disturbing to artistic sensibilities). For all I know, recent generations have been spared this rite of passage, to their detriment. For in the conflict between appreciating things as they are, and the need to know why and how they are, we are exposed to deeper reaches of the human experience. If you have ever witnessed, as I have, a human body laid open on an autopsy table, then you know what I mean.

Anyone, scientist or artist, can find awe in contemplating the mysteries of life. There was a time when it was feared that the march of science was so advancing the boundaries of knowledge that the shrinking domain of the unexplained left ever less room for God and religion. Practicing scientists knew better. Knowing more leads to discovering more unknowns; answers produce cascades of new questions. The mystery abounds, and the discovery continues. Eddington:

It is pertinent to remember that the concept of substance has disappeared from fundamental physics; what we ultimately come down to is form. Waves! Waves!! Waves!!! Or for a change — if we turn to relativity theory — curvature! Energy which, since it is conserved, might be looked upon as the modern successor of substance, is in relativity theory a curvature of space-time, and in quantum theory a periodicity of waves. I do not suggest that either the curvature or the waves are to be taken in a literal objective sense; but the two great theories, in their efforts to reduce what is known about energy to a comprehensible picture, both find what they require in a conception of “form”.

What do we really observe? Relativity theory has returned one answer — we only observe relations. Quantum theory returns another answer — we only observe probabilities.

It is impossible to trap modern physics into predicting anything with perfect determinism because it deals with probabilities from the outset.
― Arthur Stanley Eddington

Works by Eddington on Science and the Natural World are here.

Summary

The science problem today lies not with the scientists themselves, but with those attempting to halt science’s progress for the sake of political power and wealth.

Eddington:
Religious creeds are a great obstacle to any full sympathy between the outlook of the scientist and the outlook which religion is so often supposed to require … The spirit of seeking which animates us refuses to regard any kind of creed as its goal. It would be a shock to come across a university where it was the practice of the students to recite adherence to Newton’s laws of motion, to Maxwell’s equations and to the electromagnetic theory of light. We should not deplore it the less if our own pet theory happened to be included, or if the list were brought up to date every few years. We should say that the students cannot possibly realise the intention of scientific training if they are taught to look on these results as things to be recited and subscribed to. Science may fall short of its ideal, and although the peril scarcely takes this extreme form, it is not always easy, particularly in popular science, to maintain our stand against creed and dogma.
― Arthur Stanley Eddington

But enough about science. It’s politicians we need to worry about:

Footnote:

“Asked in 1919 whether it was true that only three people in the world understood the theory of general relativity, [Eddington] allegedly replied: ‘Who’s the third?’”

The Climate Fuss

Definition Fussbudget:  A person who fusses over trifles. Also called fusspot.

In the last week, Richard Lindzen released a new YouTube video which explains briefly and clearly who is making a fuss about climate and why. (Hint: it is not primarily climate physicists.) The video deserves to go viral, since the presentation is brief, informative and accessible to anyone.

Why are so many people worried, indeed panic-stricken about this issue?
It’s due not so much to climate physicists, as to politicians, environmentalists and media. Global warming alarmism provides the things they most want:

  • For politicians, it’s money and power;
  • For environmentalists, it’s money for their organizations and confirmation of their near-religious devotion to the idea that man is a destructive force acting upon nature;
  • For the media, it’s ideology, money and headlines.
  • And crony capitalists have eagerly grabbed for the subsidies that governments have so lavishly provided.

Beyond the Lindzen video, we can also include lawyers who want to make money off the climate fuss. The legal attack on Big Oil (ExxonMobil, and others to follow) is modeled after the Big Tobacco campaign, and is intended to apply a de facto tax on consumers of fossil fuels. That’s what happened with the settlements by Tobacco companies, with an immediate jump in cigarette prices.

Not only is this an end run around the legislative branch, a lot of the money will be paid to lawyers working on contingency for the state AGs who are prosecuting this. To activists, lining the pockets of class-action lawyers is not a problem, as long as fossil fuels cost more, and thus will be used less. The tactic and the players are exposed here:
The Climate Change Inquisition, Part II—The Scandal Unfolds
Margaret A. Little April 20, 2016

Summary: How can we let the hot air out of the Global Warming balloon when they have so much to gain by inflating it?

 

Climate Change is a Social Science

The post What is Climate? Is it changing? explained how “Climate Change” is a double abstraction: it refers to the derivative (change) in our expectations (patterns) of weather. Thus studies of “Climate Change” are a branch of social science, not physical science.

For example, here is a typical study, without the pretense or claim to be doing physical science.

Extreme weather perceptions in your neighbourhood and beyond, Published in Environmental Sociology, by Dr Matthew J. Cutler of the University of New Hampshire, USA. (here)

The author of the study, Dr Cutler, found that although higher household earnings were negatively associated with perceptions of extreme weather, homeownership was indeed a contributing factor, stating that “homeownership and lower incomes appear to independently increase perceptions.” Age, gender, education and political persuasion were also significantly related to extreme weather perceptions. The odds of perceiving effects from extreme weather were higher among younger, female, more educated, and Democratic respondents than among older, male, less educated, and Republican respondents.

Summary:

Climate Science is properly identified as a branch of Environmental Sociology. Its focus on “Climate Change” aims to understand how and why people perceive weather patterns to be changing or dangerous.

For the sake of human health and prosperity, all studies pertaining to Climate Change should be appreciated as social science investigations, having nothing to do with natural science or physics. Needless to say, any public policy proposals regarding Climate Change cannot be evaluated as having any beneficial effect upon the physical world. They are solely motivated by social perceptions and concerns, and should be assessed on the costs and impacts required to reduce levels of concern.

Climate Science Culture War


 

Climate Peer Pressure

Jo Nova has a great post pushing back on claims women care more about climate change than men do. Money quote:

“It’s not a sign that they “care more than men”. It’s a sign that they are less willing to take risks and speak against the current predominant meme. Who wants to be called a “denier”? As I’ve been pointing out for years, the reason there are not more women on the skeptical sensible side of the climate debate is because the belief is maintained with social pressure, not logic and reason. Where is the open debate and polite discussion that would allow women to be more involved in the decisions? Manners everyone — they were invented for a reason.”

http://joannenova.com.au/2016/03/heres-why-women-are-more-likely-to-care-about-the-climate/

The Space Frontier Gets Closer

Artistic rendition of a laser driven spacecraft. Pictorial only.

Some NASA people are thinking outside the box and see a new way to explore space using laser technology.

So far, NASA spacecraft have made 13 trips to Mars, with seven landings. The most recent — that of the Curiosity rover — took 253 days from launch on Earth to touchdown on Mars. There’s now reason to believe, however, that this journey could be made significantly faster, to the point that it takes only 3 days, according to a NASA researcher.

This could be possible using a ‘photonic propulsion’ system, says NASA scientist Philip Lubin. A massive laser based on Earth would fire bursts of photons into the ‘sail’ of the spacecraft, accelerating it up to 26% of the speed of light (about 1/4 c), which is unheard of in space flight. But that’s only for a tiny object with a 1 meter solar sail. Larger, more practical craft would be accelerated to between 1-3% of the speed of light, which is still fantastic.
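The travel times quoted are essentially distance divided by speed. Here is a back-of-envelope sketch, assuming a representative Earth-Mars distance of 225 million km and ignoring acceleration, deceleration and orbital mechanics, so the numbers are illustrative only:

```python
C = 299_792.458   # speed of light, km/s
MARS_KM = 225e6   # assumed average Earth-Mars distance, km (it varies widely)

def coast_days(fraction_of_c):
    """Coasting time at a constant fraction of light speed."""
    return MARS_KM / (fraction_of_c * C) / 86_400   # 86,400 seconds per day

print(f"26% c: {coast_days(0.26):.3f} days")   # ~ 0.033 days (under an hour)
print(f" 1% c: {coast_days(0.01):.2f} days")   # ~ 0.87 days
```

By the same arithmetic, a 3-day trip over this assumed distance corresponds to an average speed of about 870 km/s, roughly 0.3% of light speed, which suggests the quoted multi-day figures are for heavier craft reaching lower top speeds.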

The Abstract:

In the nearly 60 years of spaceflight we have accomplished wonderful feats of exploration and shown the incredible spirit of the human drive to explore and understand our universe. Yet in those 60 years we have barely left our solar system with the Voyager 1 spacecraft launched in 1977 finally leaving the solar system after 37 years of flight at a speed of 17 km/s or less than 0.006% the speed of light.

As remarkable as this we will never reach even the nearest stars with our current propulsion technology in even 10 millennium. We have to radically rethink our strategy or give up our dreams of reaching the stars, or wait for technology that does not exist. While we all dream of human spaceflight to the stars in a way romanticized in books and movies, it is not within our power to do so, nor it is clear that this is the path we should choose.

We posit a technological path forward, that while not simple, it is within our technological reach. We propose a roadmap to a program that will lead to sending relativistic probes to the nearest stars and will open up a vast array of possibilities of flight both within our solar system and far beyond. Spacecraft from gram level complete spacecraft on a wafer (“wafersats”) that reach more than ¼ c and reach the nearest star in 15 years to spacecraft with masses more than 10^5 kg (100 tons) that can reach speeds of greater than 1000 km/s. These systems can be propelled to speeds currently unimaginable with existing propulsion technologies.

To do so requires a fundamental change in our thinking of both propulsion and in many cases what a spacecraft is. In addition to larger spacecraft, some capable of transporting humans, we consider functional spacecraft on a wafer, including integrated optical communications, optical systems and sensors combined with directed energy propulsion. Since “at home” the costs can be amortized over a very large number of missions.

In addition the same photon driver can be used for planetary defense, beamed energy for distant spacecraft as well as sending power back to Earth as needed, stand-off composition analysis, long range laser communications, SETI searches and even terra forming. The human factor of exploring the nearest stars and exo-planets would be a profound voyage for humanity, one whose non-scientific implications would be enormous. It is time to begin this inevitable journey beyond our home.

Stellar environment of the Sun, maximum distance = 15 Light Years

Source of image: J.-M. Malherbe, Observatoire de Paris

This path opens up our stellar environment, shown above with solar systems within 15 light years of us.  Here is the Roadmap to Interstellar Flight

One of my readers asked if this is for real.  Yes, there are serious people working on this, and Japan has already successfully deployed a solar sail on the JAXA mission to Venus and beyond (here).

Could this be the exploration technology of the future?


 

 

X-Weathermen are Back!

The media is again awash with claims of “human footprints” in extreme weather events, with headlines like these:

“Global warming is making hot days hotter, rainfall and flooding heavier, hurricanes stronger and droughts more severe.”

“Global climate change is making weather worse over time”

“Climate change link to extreme weather easier to gauge”– U.S. Report

“Heat Waves, Droughts and Heavy Rain Have Clear Links to Climate Change, Says National Academies”

That last one refers to a paper just released by the National Academy of Sciences Press: Attribution of Extreme Weather Events in the Context of Climate Change (2016)

And as usual, the headline claims are unsupported by the actual text. From the NAS report (here):

Attribution studies of individual events should not be used to draw general conclusions about the impact of climate change on extreme events as a whole. Events that have been selected for attribution studies to date are not a representative sample (e.g., events affecting areas with high population and extensive infrastructure will attract the greatest demand for information from stakeholders) P 107

Systematic criteria for selecting events to be analyzed would minimize selection bias and permit systematic evaluation of event attribution performance, which is important for enhancing confidence in attribution results. Studies of a representative sample of extreme events would allow stakeholders to use such studies as a tool for understanding how individual events fit into the broader picture of climate change. P 110

Correctly done, attribution of extreme weather events can provide an additional line of evidence that demonstrates the changing climate, and its impacts and consequences. An accurate scientific understanding of extreme weather event attribution can be an additional piece of evidence needed to inform decisions on climate change-related actions. P. 112

The Indicative Without the Imperative


 

The antidote to such feverish reporting is provided by Mike Hulme in a publication: Attributing Weather Extremes to ‘Climate Change’: a Review (here).

He has an insider’s perspective on this issue, and is certainly among the committed on global warming (color him concerned). Yet here he writes objectively to inform us on X-weather, without advocacy: real science journalism and a public service, really.

Overview

In this third and final review I survey the nascent science of extreme weather event attribution. The article proceeds by examining the field in four stages: motivations for extreme weather attribution, methods of attribution, some example case studies and the politics of weather event attribution.

The X-Weather Issue

As many climate scientists can attest, following the latest meteorological extreme one of the most frequent questions asked by media journalists and other interested parties is: ‘Was this weather event caused by climate change?’

In recent decades the meaning of climate change in popular western discourse has changed from being a descriptive index of a change in climate (as in ‘evidence that a climatic change has occurred’) to becoming an independent causative agent (as in ‘climate change caused this event to happen’). Rather than being a descriptive outcome of a chain of causal events affecting how weather is generated, climate change has been granted power to change worlds: political and social worlds as much as physical and ecological ones.

To be more precise then, what people mean when they ask the ‘extreme weather blame’ question is: ‘Was this particular weather event caused by greenhouse gases emitted from human activities and/or by other human perturbations to the environment?’ In other words, can this meteorological event be attributed to human agency as opposed to some other form of agency?

The Motivations

Hulme shows what drives scientists to pursue the “extreme weather blame” question, noting four motivational factors.

Why have climate scientists over the last ten years embarked upon research to provide an answer beyond the stock phrase ‘no individual weather event can directly be attributed to greenhouse gas emissions’?  There seem to be four possible motives.

1. Curiosity
The first is because the question piques the scientific mind; it acts as a spur to develop new rational understanding of physical processes and new analytic methods for studying them.

2. Adaptation
A second argument, put forward by some, is that it is important to know whether or not specific instances of extreme weather are human-caused in order to improve the justification, planning and execution of climate adaptation.

3. Liability
A third argument for pursuing an answer to the ‘extreme weather blame’ question is inspired by the possibility of pursuing legal liability for damages caused. . . If specific loss and damage from extreme weather can be attributed to greenhouse gas emissions – even if expressed in terms of increased risk rather than deterministically – then lawyers might get interested.

The liability motivation for research into weather event attribution also bisects the new international political agenda of ‘loss and damage’ which has emerged in the last two years. . . The basic idea is to give recognition that loss and damage caused by climate change is legitimate ground for less developed countries to gain access to new international climate adaptation funds.

4. Persuasion
A final reason for scientists to be investing in this area of climate science – a reason stated explicitly less often than the ones above and yet one which underlies much of the public interest in the ‘extreme weather blame’ question – is frustration with and argument about the invisibility of climate change. . . If this is believed to be true – that only scientists can make climate change visible and real –then there is extra onus on scientists to answer the ‘extreme weather blame’ question as part of an effort to convince citizens of the reality of human-caused climate change.

Attribution Methods

Attributing extreme weather events to human influences requires different approaches, of which four broad categories can be identified.

1. Physical Reasoning
The first and most general approach to attributing extreme weather phenomena to rising greenhouse gas concentrations is to use simple physical reasoning.

General physical reasoning can only lead to broad qualitative statements such as ‘this extreme weather is consistent with’ what is known about the human-enhanced greenhouse effect. Such statements offer neither deterministic nor stochastic answers and clearly underdetermine the ‘weather blame question.’ It has given rise to a number of analogies to try to communicate the non-deterministic nature of extreme event attribution. The three most widely used ones concern a loaded die (the chance of rolling a ‘6’ has increased, but no single ‘6’ can be attributed to the biased die), the baseball player on steroids (the number of home runs hit increases, but no single home run can be attributed to the steroids) and the speeding car-driver (the chance of an accident increases in dangerous conditions, but no specific accident can be attributed to the fast-driving).

2. Classical Statistical Analysis
A second approach is to use classical statistical analysis of meteorological time series data to determine whether a particular weather (or climatic) extreme falls outside the range of what a ‘normal’ unperturbed climate might have delivered.

All such extreme event analyses of meteorological time series are at best able to detect outliers, but can never be decisive about possible cause(s). A different time series approach therefore combines observational data with model simulations and seeks to determine whether trends in extreme weather predicted by climate models have been observed in meteorological statistics (e.g. Zwiers et al., 2011, for temperature extremes and Min et al., 2011, for precipitation extremes). This approach is able to attribute statistically a trend in extreme weather to human influence, but not a specific weather event. Again, the ‘weather blame question’ remains underdetermined.
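As a sketch of this kind of time-series screening, the following applies a simple z-score outlier test to a hypothetical station record (the numbers are invented for illustration). As noted above, such a test can flag an event as statistically unusual relative to the ‘normal’ record, but it says nothing about cause.

```python
from statistics import mean, stdev

def is_outlier(series, value, z_threshold=3.0):
    """Flag a value as an outlier relative to a 'normal' climatology.
    Detects unusualness only; it is silent about the cause."""
    mu, sigma = mean(series), stdev(series)
    return abs(value - mu) / sigma > z_threshold

# Hypothetical summer-mean temperatures (degrees C) for some station
climatology = [21.1, 20.7, 21.5, 20.9, 21.3, 20.6, 21.0, 21.4, 20.8, 21.2]

print(is_outlier(climatology, 24.5))  # True: far outside observed variability
print(is_outlier(climatology, 21.6))  # False: within normal variability
```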

3. Fractional Attributable Risk (FAR)
Taking inspiration from the field of epidemiology, this method seeks to establish the Fractional Attributable Risk (FAR) of an extreme weather (or short-term climate) event. It asks the counterfactual question, ‘How might the risk of a weather event be different in the presence of a specific causal agent in the climate system?’

The single observational record available to us, and which is analysed in the statistical methods described above, is inadequate for this task. The solution is to use multiple model simulations of the climate system, first of all without the forcing agent(s) accused of ‘causing’ the weather event and then again with that external forcing introduced into the model.
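In epidemiological terms the quantity is FAR = 1 − P0/P1, where P0 is the probability of exceeding the event threshold in the ‘natural’ (unforced) ensemble and P1 the probability in the forced ensemble. A minimal sketch with hypothetical ensemble counts:

```python
def fractional_attributable_risk(p_natural, p_forced):
    """FAR = 1 - P0/P1, borrowed from epidemiology.
    p_natural: probability of exceeding the event threshold in model
               runs WITHOUT the external forcing.
    p_forced:  the same probability WITH the forcing included."""
    return 1.0 - p_natural / p_forced

# Hypothetical counts: the threshold is exceeded in 10 of 1000
# 'natural' runs and in 40 of 1000 'forced' runs.
p0 = 10 / 1000
p1 = 40 / 1000
far = fractional_attributable_risk(p0, p1)
print(f"FAR = {far:.2f}")  # 0.75
```

On these invented numbers, three quarters of the event’s risk would be attributed to the forcing, yet the statement remains probabilistic: it never says this particular event was caused by the forcing.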

The credibility of this method of weather attribution can be no greater than the overall credibility of the climate model(s) used – and may be less, depending on the ability of the model in question to simulate accurately the precise weather event under consideration at a given scale (e.g. a heatwave in continental Europe, a rain event in northern Thailand) (see Christidis et al., 2013a).

4. Eco-systems Philosophy
A fourth, more philosophical, approach to weather event attribution should also be mentioned. This is the argument that since human influences on the climate system as a whole are now clearly established – through changing atmospheric composition, altered land surface characteristics, and so on – there can no longer be such a thing as a purely natural weather event. All weather — whether it be a raging tempest or a still summer afternoon — is now attributable to human influence, at least to some extent. Weather is the local and momentary expression of a complex system whose functioning as a system is now different to what it would otherwise have been had humans not been active.

Results from Weather Attribution Studies

Hulme provides a table of numerous such studies using various methods, along with his view of the findings.

It is likely that attribution of temperature-related extremes using FAR methods will always be more attainable than for other meteorological extremes such as rainfall and wind, which climate models generally find harder to simulate faithfully at the spatial scales involved. As discussed below, this limitation on which weather events and in which regions attribution studies can be conducted will place important constraints on any operational extreme weather attribution system.

Political Dimensions of Weather Attribution

Hulme concludes by discussing the political hunger for scientific proof in support of policy actions.

But Hulme et al. (2011) show why such ambitious claims are unlikely to be realised. Investment in climate adaptation, they claim, is most needed “… where vulnerability to meteorological hazard is high, not where meteorological hazards are most attributable to human influence” (p.765). Extreme weather attribution says nothing about how damages are attributable to meteorological hazard as opposed to exposure to risk; it says nothing about the complex political, social and economic structures which mediate physical hazards.

And separating weather into two categories – ‘human-caused’ weather and ‘tough-luck’ weather – raises practical and ethical concerns about any subsequent investment allocation guidelines that would exclude the victims of ‘tough-luck’ weather from benefiting from adaptation funds.

Contrary to the claims of some weather attribution scientists, the loss and damage agenda of the UNFCCC, as it is currently emerging, makes no distinction between ‘human-caused’ and ‘tough-luck’ weather. “Loss and damage impacts fall along a continuum, ranging from ‘events’ associated with variability around current climatic norms (e.g., weather-related natural hazards) to [slow-onset] ‘processes’ associated with future anticipated changes in climatic norms” (Warner et al., 2012:21). Although definitions and protocols have not yet been formally ratified, it seems unlikely that there will be a role for the sort of forensic science being offered by extreme weather attribution science.

Conclusion

Thank you Mike Hulme for a sane, balanced and expert analysis. It strikes me as being another element in a “Quiet Storm of Lucidity”.

Is that light the end of the tunnel or an oncoming train?

Claim: Fossil Fuels Cause Global Warming

 

Recently I addressed this claim by referring to this chart produced by scientists from the Arctic and Antarctic Research Institute (AARI).

Figure 5.1. Comparative dynamics of the World Fuel Consumption (WFC) and Global Surface Air Temperature Anomaly (ΔT), 1861-2000. The thin dashed line represents annual ΔT, the bold line—its 13-year smoothing, and the line constructed from rectangles—WFC (in millions of tons of nominal fuel) (Klyashtorin and Lyubushin, 2003). Source: Frolov et al. 2009

 

In their commentary, it is clear why the data do not support the claim that fossil fuels cause global warming. From Frolov et al. 2009:

The WFC curve shows an exponential increase, which doubles approximately every 30 years, increasing 25-fold since the middle of the nineteenth century. The global air temperature anomaly curve shows a positive trend of +0.06°C/10 years (Sonechkin et al., 1997). At the same time, there are cyclic changes with periods of about 60 years. The correlation between these curves changes its sign every 30 years, varying from −0.88 (1940–1970) to +0.94 (1970–2000). Hence, there is no direct linear connection between WFC (which indirectly represents CO2 concentration in the atmosphere) and global air temperature. The authors of this study therefore conclude that the WFC increase is not an obvious cause of the increase in global air temperature.
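The sign-flipping correlation described in the quote can be reproduced with synthetic stand-ins for the two curves (these are invented series, not the real WFC or temperature data): an exponential doubling every 30 years against a linear trend plus a 60-year cycle.

```python
import math

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Synthetic stand-ins: consumption doubling every 30 years; temperature
# as a linear trend plus a 60-year cycle peaking around 1940.
years = list(range(1861, 2001))
wfc = [2 ** ((y - 1861) / 30) for y in years]
temp = [0.006 * (y - 1861) + 0.2 * math.sin(2 * math.pi * (y - 1925) / 60)
        for y in years]

results = {}
for start in (1940, 1970):
    i = years.index(start)
    r = pearson_r(wfc[i:i + 31], temp[i:i + 31])
    results[start] = r
    print(f"{start}-{start + 30}: r = {r:+.2f}")
# The same monotonically rising consumption curve correlates negatively
# in one 30-year window and positively in the next.
```

The point of the sketch is that a short-window correlation with a steadily rising series is dominated by the cyclic component, so its sign alone cannot settle the causal question.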

In this post, I am bringing the analysis up to date by showing World Fossil Fuel Consumption (WFFC) compared to Global Temperature Anomalies.

The WFFC numbers come from US EIA and can be accessed here. I have included only the statistics for coal, oil and gas, which comprise 91% of total energy consumed. The remainder are hydro, nuclear and other renewables. 2015 numbers are not yet available. UAH version 6 provides global temperature anomalies for the lower troposphere.

The correlation overall is moderately positive at 0.60, but the two patterns are markedly different because of the 1998 El Niño event. The period 1980 to 2000 (overlapping with the AARI graph) shows a weakly positive correlation of 0.48. From 2000 to 2014 the correlation is almost non-existent at 0.07.

Summary

In the long term and in the recent short term, use of fossil fuels is not the obvious cause of temperature changes. The context and background for reaching this conclusion is provided below (from the previous post).

Legal Test of Global Warming

In a previous post (here), I discussed the Bradford Hill protocol that has become precedent for trials concerning scientific evidence for legal liability.

Austin Bradford Hill was the epidemiologist and statistician who brought clarity and methodology for the courts to consider and rule on accusations such as:

Thalidomide is causing birth defects;
Asbestos dust is causing lung disease;
as well as frequent claims of causal relationships between illness, injury and conditions of work.

The Global Warming Claim

When it comes to Global Warming, the proposition is straightforward:
Rising fossil fuel emissions are causing rising global temperatures.

The procedure to test that claim is described by Nathan Schachtman here.

Proper epidemiological methodology begins with published study results which demonstrate an association between a drug and an unfortunate effect. Once an association has been found, a judgment as to whether a real causal relationship between exposure to a drug and a particular birth defect really exists must be made.

Step 1: Establish an association between two variables.
Proper epidemiological method requires surveying the pertinent published studies that investigate whether there is an association between the medication use and the claimed harm. The expert witnesses must, however, do more than write a bibliography; they must assess any putative associations for “chance, confounding or bias”:

Step 2: Rule out chance as an explanation
The appropriate and generally accepted methodology for accomplishing this step of evaluating a putative association is to consider whether the association is statistically significant at the conventional level.
“Generally accepted methodology considers statistically significant replication of study results in different populations because apparent associations may reflect flaws in methodology.”
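One generally accepted way to carry out this step is a permutation test: shuffle the group labels many times and ask how often chance alone reproduces a difference as large as the one observed. The 0/1 outcome data below are hypothetical, purely to show the mechanics.

```python
import random

random.seed(0)

def permutation_p_value(exposed, unexposed, n_perm=10_000):
    """Two-sided permutation test: how often does random relabelling
    produce a difference in outcome rates at least as extreme as the
    one actually observed?"""
    observed = sum(exposed) / len(exposed) - sum(unexposed) / len(unexposed)
    pooled = exposed + unexposed
    k = len(exposed)
    extreme = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        diff = sum(pooled[:k]) / k - sum(pooled[k:]) / (len(pooled) - k)
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_perm

# Hypothetical 0/1 outcomes (e.g. harm observed) in two study groups
exposed = [1] * 18 + [0] * 32    # 36% outcome rate
unexposed = [1] * 6 + [0] * 44   # 12% outcome rate
p = permutation_p_value(exposed, unexposed)
print(f"p = {p:.4f}")
```

At the conventional 0.05 level, a p-value this small would rule out chance as an explanation for these invented data; a larger p-value would leave the association unestablished.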

Step 3: Rule out bias or confounding factors.
The studies must be structured to analyze and reject other factors or influences, such as non-random sampling, or intervening variables such as demographic or socio-economic differences.

Step 4: Infer Causation by Applying Accepted Causative Factors
Most often legal proceedings follow the Bradford Hill factors, which are delineated here.

By way of context Bradford Hill says this:

None of my nine viewpoints can bring indisputable evidence for or against the cause-and-effect hypothesis and none can be required as a sine qua non. What they can do, with greater or less strength, is to help us to make up our minds on the fundamental question – is there any other way of explaining the set of facts before us, is there any other answer equally, or more, likely than cause and effect?

Such is the legal terminology for the “null” hypothesis: As long as there is another equally or more likely explanation for the set of facts, the claimed causation is unproven.

The Causative Factors

What aspects of that association should we especially consider before deciding that the most likely interpretation of it is causation?

(1) Strength. First upon my list I would put the strength of the association.

(2) Consistency: Next on my list of features to be specially considered I would place the consistency of the observed association. Has it been repeatedly observed by different persons, in different places, circumstances and times?

To test the Global Warming claim, let’s consider the association between world fuel consumption (WFC) and surface air temperatures (SAT):

Figure 5.1. Comparative dynamics of the World Fuel Consumption (WFC) and Global Surface Air Temperature Anomaly (ΔT), 1861-2000. The thin dashed line represents annual ΔT, the bold line—its 13-year smoothing, and the line constructed from rectangles—WFC (in millions of tons of nominal fuel) (Klyashtorin and Lyubushin, 2003). Source: Frolov et al. 2009


In Figure 5.1, the dynamics of global air temperature anomalies obtained from instrumental measurements over the last 140 years is compared with changes in world fuel consumption (WFC) (Makarov, 1998). The WFC curve shows an exponential increase, which doubles approximately every 30 years, increasing 25-fold since the middle of the nineteenth century. The global air temperature anomaly curve shows a positive trend of +0.06°C/10 years (Sonechkin et al., 1997). At the same time, there are cyclic changes with periods of about 60 years. The correlation between these curves changes its sign every 30 years, varying from −0.88 (1940–1970) to +0.94 (1970–2000). Hence, there is no direct linear connection between WFC (which indirectly represents CO2 concentration in the atmosphere) and global air temperature. The authors of this study therefore conclude that the WFC increase is not an obvious cause of the increase in global air temperature.

The other causative factors could be applied, but they cannot add weight against the argument above.

Case Closed

The legal methodology above is used to decide the causal relationship between two variables. Clearly, in climate science the starting question is: Do rising fossil fuel emissions cause temperatures to rise? Those who have been following the issue know that there are many arguments underneath: Why don’t temperatures always rise along with CO2? Has chance been ruled out? Aren’t natural factors confounding the association? And so on.

For myself, I will join in the conclusion reached by Frolov et al., who go on to further explain their position:

In general, although climate models are based on physics, they inevitably include a number of adjustable parameters that are fitted to past temperature changes. We are not aware of a single climate model based on fundamental physics without adjustable parameters that has been subjected to a rigorous test against actual climate data. Climate modelers appear to assume that the Earth’s climate would continue without change, were it not for greenhouse gas emissions. They do not take into account the possibility that natural climate cycles are also acting independently of effects induced by buildup of greenhouse gas concentrations. As we have shown in Chapter 4, there is evidence for cyclic variability of Arctic climates. Furthermore, there is considerable evidence for past variability of global climate as expressed in the so-called Medieval Warm Period (900-1100) and the Little Ice Age (1600-1850). These fluctuations appear to be as great as the temperature rise of the 20th century, yet, there was no contribution of greenhouse gases to these climate changes.

A major challenge in climate modeling is to understand the range of natural fluctuations, and separate these from climate changes induced by human activity (greenhouse gas emissions, land clearing, irrigation, …). The models neglect natural fluctuations because they have no means of incorporating them, and put the entire blame for climate changes since the 19th century on human activity. As a result, they appear to project an extreme view of the future that seems unlikely to be reliable.

Again my thanks to Dr. Bernaerts for the copy of this book:

