Arctic Upset Alert March 25

A stunning turnaround by Arctic sea ice. In March Madness terms we could say: “We have a ballgame!” Those saying Arctic ice would be a big loser in 2016 may have to eat crow.


Yesterday NH ice extent grew dramatically to take a slight lead over the ten-year average for day 84, and now stands at 99% of the 2016 maximum set on day 61. At 14.91 M km2, that is 330k km2 more than 2015 and 350k km2 more than SII is showing.

Yesterday MASIE showed a new 2016 high extent in the strategically located Barents Sea. The Central Arctic also grew to a virtual tie with its maximum, set in early January 2016. All seas gained ice or held their maximums, except for a small loss in Okhotsk.

 

Earlier I compared this month in the Arctic to the March Madness of the NCAA basketball tournament, with intense competition and surprising results.  In the first half of March, the ocean water was scoring at will against the ice pack, and warnings of huge losses were announced.

In the second half, however, the ice is making a comeback, and the March outcome is still in doubt. This is important because March average extent sets the baseline for the melt season to come.

MASIE shows significant gains in the last week, while SII has grown to set a new maximum for the year on day 81. Notably, Barents has gained back 228k km2, Greenland Sea added 81k, Bering Sea 138k, and the Central Arctic 90k. Okhotsk has lost ice, but as readers already know, the Okhotsk Sea lies outside the Arctic Circle and will melt out completely. Barents and Greenland Seas are located at the nexus of the Arctic and the North Atlantic and greatly impact the Arctic Ocean as a whole. Bering is also important, positioned at the Pacific gateway into the Arctic.

For more on discrepancies between MASIE and SII see here.

Update March 26

Some have expressed concern that high pressure over the Arctic could accelerate the flow of ice out through the Fram Strait.  That is an important consideration.  In a recent post I pointed to work by Russian scientists showing that the removal of icebergs through Fram in fact opens up area in the Central Arctic for more ice to form.  Their analysis says that 4 to 6 years after an acceleration of Fram ice loss, there is an overall increase in ice extent, especially in the Siberian seas.

Guess what?  The massive cyclone in 2012 pushed out lots of ice, and here we are 4 years later with ice growth appearing.

More on this is here: https://rclutz.wordpress.com/2016/03/02/the-great-arctic-ice-exchange/

With the Arctic Oscillation (AO) expected to hover around neutral over the next two weeks, Arctic ice extent is unpredictable.


 

Arctic Panic Postponed

Earlier I compared this month in the Arctic to the March Madness of the NCAA basketball tournament, with intense competition and surprising results.  In the first half of March, the ocean water was scoring at will against the ice pack, and warnings of huge losses were announced.


In the second half, however, the ice is making a comeback, and the March outcome is still in doubt. This is important because March average extent sets the baseline for the melt season to come. At this point, three-quarters of the way through March, MASIE shows the average monthly extent below the ten-year average but higher than 2015.

MASIE shows significant gains in the last week, while SII has grown to set a new maximum for the year two days ago. Notably, Barents has gained back about 150k km2, Greenland Sea added 70k and Bering Sea 110k, while Okhotsk has lost 100k. As readers already know, the Okhotsk Sea lies outside the Arctic Circle and will melt out completely. Barents and Greenland Seas, by contrast, are located at the nexus of the Arctic and the North Atlantic and greatly impact the Arctic Ocean as a whole. Bering is also important, positioned at the Pacific gateway into the Arctic.

For more on discrepancies between MASIE and SII see here.

With the Arctic Oscillation (AO) expected to hover around neutral over the next two weeks, Arctic ice extent is unpredictable.


 

Donald and Hillary in Grease!

 


Opening this summer, the new production of the classic American musical Grease features Donald Trump and Hillary Clinton in the entertaining match between the prim and proper cheerleader and the rebel hot rodder from New Jersey.

Judith Curry had a good one featuring the leads from Harry Potter:

Arctic Ice March Madness

Updating Arctic ice extents for the first 20 days of March, the peak ice month. Lots of changes and surprises, just like the NCAA basketball tournament.


MASIE shows March 2 as the date of the daily annual maximum, both on the ten-year average and in 2015. In 2016 the daily max came on March 1, and SII shows an extent that day lower by 635k km2.  (SII refers to the Sea Ice Index produced by NOAA@NSIDC.)

As March has progressed, MASIE shows ice declining both this year and last. Meanwhile the MASIE ten-year average held steady, and 2016 SII added extent in the last week. The gap between MASIE and SII is narrowing in 2016, though SII extents still average almost 400k km2 less for March.

Ice Extents, 2015 vs 2016
Region 2015 (day 80) 2016 (day 80) km2 Diff.
 (0) Northern_Hemisphere 14634556 14593011 -41545
 (1) Beaufort_Sea 1070445 1070445 0
 (2) Chukchi_Sea 966006 965989 -17
 (3) East_Siberian_Sea 1087137 1087120 -17
 (4) Laptev_Sea 897845 897809 -36
 (5) Kara_Sea 920392 916674 -3719
 (6) Barents_Sea 548675 396037 -152638
 (7) Greenland_Sea 666601 557940 -108661
 (8) Baffin_Bay_Gulf_of_St._Lawrence 1829904 1599833 -230072
 (9) Canadian_Archipelago 853214 853178 -36
 (10) Hudson_Bay 1260903 1260871 -33
 (11) Central_Arctic 3248013 3191707 -56306
 (12) Bering_Sea 657014 642235 -14779
 (13) Baltic_Sea 14462 42460 27998
 (14) Sea_of_Okhotsk 606552 1108734 502182

Comparing 2016 on day 80 with 2015 shows the important ice deficits are on the European and Canadian sides: Barents (even lower than last year), along with Greenland Sea and Baffin Bay. In contrast to 2015, Okhotsk is much more normal this year and almost offsets the losses elsewhere.
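As an aside, the regional arithmetic is easy to check with a few lines of Python. The values below are hand-copied from the MASIE table above for illustration; this is not an official MASIE data feed.

```python
# Day-80 extents (km2) hand-copied from the MASIE table above: (region, 2015, 2016)
extents = [
    ("Northern_Hemisphere", 14634556, 14593011),
    ("Barents_Sea",           548675,   396037),
    ("Greenland_Sea",         666601,   557940),
    ("Central_Arctic",       3248013,  3191707),
    ("Sea_of_Okhotsk",        606552,  1108734),
]

# Print the year-on-year difference for each region
for region, y2015, y2016 in extents:
    print(f"{region:<22}{y2016 - y2015:>9} km2")
```

Running this reproduces the Diff. column: Barents down 152,638 km2, Okhotsk up 502,182 km2, and the hemisphere as a whole down only 41,545 km2.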

With AER and CPC showing the Arctic Oscillation neutral at present, and 14-day forecasts for more of the same, no one knows what to expect. We can only watch and see.

http://www.cpc.ncep.noaa.gov/products/precip/CWlink/daily_ao_index/ao_index_ensm.shtml

For more on discrepancies between MASIE and SII see here.


X-Weathermen are Back!

The media is again awash with claims of “human footprints” in extreme weather events, with headlines like these:

“Global warming is making hot days hotter, rainfall and flooding heavier, hurricanes stronger and droughts more severe.”

“Global climate change is making weather worse over time”

“Climate change link to extreme weather easier to gauge”– U.S. Report

“Heat Waves, Droughts and Heavy Rain Have Clear Links to Climate Change, Says National Academies”

That last one refers to a paper just released by the National Academy of Sciences Press: Attribution of Extreme Weather Events in the Context of Climate Change (2016)

And as usual, the headline claims are unsupported by the actual text. From the NAS report (here):

Attribution studies of individual events should not be used to draw general conclusions about the impact of climate change on extreme events as a whole. Events that have been selected for attribution studies to date are not a representative sample (e.g., events affecting areas with high population and extensive infrastructure will attract the greatest demand for information from stakeholders) P 107

Systematic criteria for selecting events to be analyzed would minimize selection bias and permit systematic evaluation of event attribution performance, which is important for enhancing confidence in attribution results. Studies of a representative sample of extreme events would allow stakeholders to use such studies as a tool for understanding how individual events fit into the broader picture of climate change. P 110

Correctly done, attribution of extreme weather events can provide an additional line of evidence that demonstrates the changing climate, and its impacts and consequences. An accurate scientific understanding of extreme weather event attribution can be an additional piece of evidence needed to inform decisions on climate change-related actions. P. 112

The Indicative Without the Imperative


The antidote to such feverish reporting is provided by Mike Hulme in a publication: Attributing Weather Extremes to ‘Climate Change’: a Review (here).

He has an insider’s perspective on this issue, and is certainly among the committed on global warming (color him concerned). Yet here he writes objectively to inform us on X-weather, without advocacy: real science journalism and a public service, really.

Overview

In this third and final review I survey the nascent science of extreme weather event attribution. The article proceeds by examining the field in four stages: motivations for extreme weather attribution, methods of attribution, some example case studies and the politics of weather event attribution.

The X-Weather Issue

As many climate scientists can attest, following the latest meteorological extreme one of the most frequent questions asked by media journalists and other interested parties is: ‘Was this weather event caused by climate change?’

In recent decades the meaning of climate change in popular western discourse has changed from being a descriptive index of a change in climate (as in ‘evidence that a climatic change has occurred’) to becoming an independent causative agent (as in ‘climate change caused this event to happen’). Rather than being a descriptive outcome of a chain of causal events affecting how weather is generated, climate change has been granted power to change worlds: political and social worlds as much as physical and ecological ones.

To be more precise then, what people mean when they ask the ‘extreme weather blame’ question is: ‘Was this particular weather event caused by greenhouse gases emitted from human activities and/or by other human perturbations to the environment?’ In other words, can this meteorological event be attributed to human agency as opposed to some other form of agency?

The Motivations

Hulme shows what drives scientists to pursue the “extreme weather blame” question, noting four motivational factors.

Why have climate scientists over the last ten years embarked upon research to provide an answer beyond the stock phrase ‘no individual weather event can directly be attributed to greenhouse gas emissions’?  There seem to be four possible motives.

1. Curiosity
The first is because the question piques the scientific mind; it acts as a spur to develop new rational understanding of physical processes and new analytic methods for studying them.

2. Adaptation
A second argument, put forward by some, is that it is important to know whether or not specific instances of extreme weather are human-caused in order to improve the justification, planning and execution of climate adaptation.

3. Liability
A third argument for pursuing an answer to the ‘extreme weather blame’ question is inspired by the possibility of pursuing legal liability for damages caused. . . If specific loss and damage from extreme weather can be attributed to greenhouse gas emissions – even if expressed in terms of increased risk rather than deterministically – then lawyers might get interested.

The liability motivation for research into weather event attribution also bisects the new international political agenda of ‘loss and damage’ which has emerged in the last two years. . . The basic idea is to give recognition that loss and damage caused by climate change is legitimate ground for less developed countries to gain access to new international climate adaptation funds.

4. Persuasion
A final reason for scientists to be investing in this area of climate science – a reason stated explicitly less often than the ones above and yet one which underlies much of the public interest in the ‘extreme weather blame’ question – is frustration with and argument about the invisibility of climate change. . . If this is believed to be true – that only scientists can make climate change visible and real –then there is extra onus on scientists to answer the ‘extreme weather blame’ question as part of an effort to convince citizens of the reality of human-caused climate change.

Attribution Methods

Attributing extreme weather events to human influences requires different approaches, of which four broad categories can be identified.

1. Physical Reasoning
The first and most general approach to attributing extreme weather phenomena to rising greenhouse gas concentrations is to use simple physical reasoning.

General physical reasoning can only lead to broad qualitative statements such as ‘this extreme weather is consistent with’ what is known about the human-enhanced greenhouse effect. Such statements offer neither deterministic nor stochastic answers and clearly underdetermine the ‘weather blame question.’ It has given rise to a number of analogies to try to communicate the non-deterministic nature of extreme event attribution. The three most widely used ones concern a loaded die (the chance of rolling a ‘6’ has increased, but no single ‘6’ can be attributed to the biased die), the baseball player on steroids (the number of home runs hit increases, but no single home run can be attributed to the steroids) and the speeding car-driver (the chance of an accident increases in dangerous conditions, but no specific accident can be attributed to the fast-driving).

2. Classical Statistical Analysis
A second approach is to use classical statistical analysis of meteorological time series data to determine whether a particular weather (or climatic) extreme falls outside the range of what a ‘normal’ unperturbed climate might have delivered.

All such extreme event analyses of meteorological time series are at best able to detect outliers, but can never be decisive about possible cause(s). A different time series approach therefore combines observational data with model simulations and seeks to determine whether trends in extreme weather predicted by climate models have been observed in meteorological statistics (e.g. Zwiers et al., 2011, for temperature extremes and Min et al., 2011, for precipitation extremes). This approach is able to attribute statistically a trend in extreme weather to human influence, but not a specific weather event. Again, the ‘weather blame question’ remains underdetermined.


3. Fractional Attributable Risk (FAR)
Taking inspiration from the field of epidemiology, this method seeks to establish the Fractional Attributable Risk (FAR) of an extreme weather (or short-term climate) event. It asks the counterfactual question, ‘How might the risk of a weather event be different in the presence of a specific causal agent in the climate system?’

The single observational record available to us, and which is analysed in the statistical methods described above, is inadequate for this task. The solution is to use multiple model simulations of the climate system, first of all without the forcing agent(s) accused of ‘causing’ the weather event and then again with that external forcing introduced into the model.

The credibility of this method of weather attribution can be no greater than the overall credibility of the climate model(s) used – and may be less, depending on the ability of the model in question to simulate accurately the precise weather event under consideration at a given scale (e.g. a heatwave in continental Europe, a rain event in northern Thailand) (see Christidis et al., 2013a).
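The FAR logic can be sketched with a toy Monte Carlo exercise in Python. The numbers here are invented for illustration only, standing in for the two model ensembles described above (one set of runs without the forcing agent, one with it); real attribution studies use full climate model simulations, not Gaussians.

```python
import random

random.seed(42)

def exceedance_prob(mean, sd, threshold, n=100_000):
    """Fraction of simulated seasonal anomalies that exceed the threshold."""
    return sum(random.gauss(mean, sd) > threshold for _ in range(n)) / n

# Invented numbers: a "heatwave" threshold two standard deviations above
# the unforced mean, with the forcing shifting the mean by half an sd.
p_natural = exceedance_prob(mean=0.0, sd=1.0, threshold=2.0)  # runs without forcing
p_forced  = exceedance_prob(mean=0.5, sd=1.0, threshold=2.0)  # runs with forcing

far = 1 - p_natural / p_forced  # Fraction of Attributable Risk
print(f"P(natural) = {p_natural:.3f}, P(forced) = {p_forced:.3f}, FAR = {far:.2f}")
```

A FAR of roughly two-thirds in this toy setup would be read as "about two-thirds of the risk of exceeding the threshold is attributable to the forcing": a statement about changed odds, never about any single event, exactly as the loaded-die analogy suggests.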

4. Eco-systems Philosophy
A fourth, more philosophical, approach to weather event attribution should also be mentioned. This is the argument that since human influences on the climate system as a whole are now clearly established – through changing atmospheric composition, altered land surface characteristics, and so on – there can no longer be such a thing as a purely natural weather event. All weather — whether it be a raging tempest or a still summer afternoon — is now attributable to human influence, at least to some extent. Weather is the local and momentary expression of a complex system whose functioning as a system is now different to what it would otherwise have been had humans not been active.

Results from Weather Attribution Studies

Hulme provides a table of numerous such studies using various methods, along with his view of the findings.

It is likely that attribution of temperature-related extremes using FAR methods will always be more attainable than for other meteorological extremes such as rainfall and wind, which climate models generally find harder to simulate faithfully at the spatial scales involved. As discussed below, this limitation on which weather events and in which regions attribution studies can be conducted will place important constraints on any operational extreme weather attribution system.

Political Dimensions of Weather Attribution

Hulme concludes by discussing the political hunger for scientific proof in support of policy actions.

But Hulme et al. (2011) show why such ambitious claims are unlikely to be realised. Investment in climate adaptation, they claim, is most needed “… where vulnerability to meteorological hazard is high, not where meteorological hazards are most attributable to human influence” (p.765). Extreme weather attribution says nothing about how damages are attributable to meteorological hazard as opposed to exposure to risk; it says nothing about the complex political, social and economic structures which mediate physical hazards.

And separating weather into two categories — ‘human-caused’ weather and ‘tough-luck’ weather – raises practical and ethical concerns about any subsequent investment allocation guidelines which excluded the victims of ‘tough-luck weather’ from benefiting from adaptation funds.

Contrary to the claims of some weather attribution scientists, the loss and damage agenda of the UNFCCC, as it is currently emerging, makes no distinction between ‘human-caused’ and ‘tough-luck’ weather. “Loss and damage impacts fall along a continuum, ranging from ‘events’ associated with variability around current climatic norms (e.g., weather-related natural hazards) to [slow-onset] ‘processes’ associated with future anticipated changes in climatic norms” (Warner et al., 2012:21). Although definitions and protocols have not yet been formally ratified, it seems unlikely that there will be a role for the sort of forensic science being offered by extreme weather attribution science.

Conclusion

Thank you Mike Hulme for a sane, balanced and expert analysis. It strikes me as being another element in a “Quiet Storm of Lucidity”.

Is that light the end of the tunnel or an oncoming train?

Sun and Ice

 

The warm March sun is melting the snow and ice in our neighborhood, so it seems like a good time to talk about the sun and Arctic climate change.

Figure 6.5. Annual-mean Arctic-wide air temperature anomaly time series (dotted line) correlated with estimated total solar irradiance (solid line in the top panel) from the model by Hoyt and Schatten, and with the mixing ratio of atmospheric carbon dioxide (solid line in the bottom panel). From Frolov et al. 2009.

Again, I am relying on a book by Frolov et al., Climate Change in Eurasian Arctic Shelf Seas: Centennial Ice Cover Observations (with some additional more recent material below).

Of course, the most direct effect of the sun on ice is in the summer:

Short-wave solar radiation is the most significant summer-season forcing, or, more precisely, the part of it that depends on albedo and absorption by the ice cover and the sea. Due to changes in albedo not related to greenhouse gases of anthropogenic origin, this heat balance constituent can vary by several dozen W/m2 in polar regions, or one order of magnitude greater than the most optimistic assessments of the influence of greenhouse gases. P 121

And the internal oscillations of the ocean-ice-atmosphere system were discussed extensively in a previous post here:

It was noted in Sections 4.1 and 4.2 that air temperature at mid and high latitudes primarily depends on dynamic processes in the atmosphere (Alekseev, 2000; Alekseev et al., 2003; Vorobiev and Smirnov, 2003). They influence air temperature due to both advective processes and the impact of cloudiness, which depend on the type of baric system in play. In winter, this influence is particularly high in areas where anticyclones are common. Weakening of anticyclones results in increasing temperature and cloudiness. That variation in cloudiness is one of the main causes of climate change is indicated by Sherstyukov (2008). p 119

Frolov et al. explore the connection between solar activity and these atmospheric processes. They are not jumping to conclusions, and recognize that uncertainty surrounds the mechanisms linking Solar Activity (SA) and Arctic ice variability.

The variation of temperature matches the TSI curve far better than it matches the CO2 curve. However, the Hoyt and Schatten model for TSI is just one of many, and other models lead to very different patterns for TSI vs. year. Furthermore, climate modelers would argue that the temperature curve in the second warming epoch represents the continuation of the first warming epoch, interrupted by a period from about 1940 to about 1980 when increasing aerosol concentrations outweighed the effect of increasing greenhouse gases. Therefore, Figure 6.5 is just one representation of many that could be derived. Nevertheless, if Figure 6.5 were taken at face value, the temperature and TSI variation charts would suggest the presence of both a positive “100-year” trend and quasi 60-year cyclic oscillations.

While Figure 6.5 is suggestive, the fact remains that we really do not know how TSI varied prior to the advent of satellite measurements around 1980. Figure 6.5 demonstrates that the form of the variability of Arctic surface temperatures during the 20th century resembles the variability of the Hoyt and Schatten model for TSI. This is suggestive that variations in TSI may have been an important factor in 20th century climate change. Though the total variance of TSI from 1880 to 2000 according to Hoyt and Schatten was 384 W/m2, the simple spreading of this flow over the spherical area of the Earth is incorrect. As we show in this work, a significant part of TSI variance influences the high-latitude regions. Furthermore, as was noted in Section 5.4, Budyko (1969) concluded by calculations that solar constant variations of several tenths of % are sufficient to induce essential climate changes.

In seeking a relationship between solar variability and climate change, we may consider TSI and SA (Solar Activity). The connection between TSI and climate is direct; TSI represents the fundamental heat input from the Sun that drives our climate. However, although SA represents fundamental aspects of the dynamics of the Sun, its connection to the total power emitted by the Sun is not quite clear. SA includes energetic particle emission, electromagnetic emission in the UV and higher frequency ranges and magnetic fields. It is manifested in the Earth’s phenomena in the form of polar lights, magnetic storms, radio-communication blackouts, etc. A number of different indices are used to measure the level of SA, particularly sunspot indices (Wolf number, etc.), the intensity of solar wind, and various magnetic indices. Even though variations in TSI associated with changes in SA may be small, the impact on higher latitudes is significantly amplified by the interaction of charged solar wind particles with the Earth’s magnetic field. As shown in our work, evidence exists that variability of SA is connected to Arctic climate variations. Frolov et al. 2009 pp. 124

Conclusion

The Earth’s climate is affected by internal and external factors. The internal factors include natural hydro-meteorological, geological, and biological processes, as well as self-oscillation phenomena related to interactions in the ocean-sea ice-atmosphere-glaciers system. In addition, anthropogenic impacts are also considered to be internal factors; they are caused by the increase in concentration of greenhouse gases in the atmosphere because of human activity. External factors include solar activity, tidal and nutation phenomena, variability of the Earth’s rotation speed, fluctuations in the solar constant, fluxes of energy and charged particles from space, and other astronomical factors. p.133

Addendum

Some may claim the Hoyt and Schatten model is outdated, so I provide recent comparable results from Jan Erik Solheim (October 2014, here).

Figure 4: Annual-mean EPTG over the entire Northern Hemisphere (°C/latitude; dotted blue line) and smoothed 10-yr running mean (dashed blue line) versus the estimated TSI of Hoyt and Schatten (Soon and Legates, 2013).

The reconstruction by D. Hoyt and K. Schatten (1993), updated with the ACRIM data (Scafetta, 2013), gives a remarkably good correlation with the Central England temperature back to 1700. It also shows close correlation with the variation of surface temperature at three drastically different geographic regions with respect to climate: USA, Arctic and China.

The excellent relationship between TSI and the Equator-to-Pole (Arctic) temperature gradient (EPTG) is displayed in Figure 4. An increase in TSI is related to a decrease in the temperature gradient between the Equator and the Arctic. This may be explained as follows: an increase in TSI results in an increase in both oceanic and atmospheric heat transport to the Arctic in the warm period since 1970.

White sea ice in the Arctic melting from the sun, and also reflecting back solar energy.

 

Much Ado About Methane

Methane pollution surrounding Porter Ranch, LA (Photo credit: Energy Efficiency Team)

The recent leak in California attracted mass media attention, and now Canada and the US have announced reductions in methane emissions, timed to show PM Trudeau’s visit was not just window dressing. But how important is methane as an issue for the climate or the environment?

From Sea Friends (here):

Methane
Methane is natural gas CH4 which burns cleanly to carbon dioxide and water. Methane is eagerly sought after as fuel for electric power plants because of its ease of transport and because it produces the least carbon dioxide for the most power. Also cars can be powered with compressed natural gas (CNG) for short distances.

In many countries CNG has been widely distributed as the main home heating fuel. As a consequence, methane has leaked to the atmosphere in large quantities, now firmly controlled. Grazing animals also produce methane in their complicated stomachs and methane escapes from rice paddies and peat bogs like the Siberian permafrost.

It is thought that methane is a very potent greenhouse gas because it absorbs some infrared wavelengths 7 times more effectively than CO2, molecule for molecule, and by weight even 20 times. As we have seen previously, this also means that within a distance of metres, its effect has saturated, and further transmission of heat occurs by convection and conduction rather than by radiation.

Note that when H2O is present in the lower troposphere, there are few photons left for CH4 to absorb:

Even if the IPCC radiative greenhouse theory were true, methane occurs only in minute quantities in air, 1.8ppm versus CO2 of 390ppm. By weight, CH4 is only 5.24Gt versus CO2 3140Gt (on this assumption). If it truly were twenty times more potent, it would amount to an equivalent of 105Gt CO2 or one thirtieth that of CO2. A doubling in methane would thus have no noticeable effect on world temperature.

However, the factor of 20 is entirely misleading because absorption is proportional to the number of molecules (=volume), so the factor of 7 (7.3) is correct and 20 is wrong. With this in mind, the perceived threat from methane becomes even less.

Further still, methane has been rising from 1.6ppm to 1.8ppm in 30 years (1980-2010), assuming that it has not stopped rising, this amounts to a doubling in 2-3 centuries. In other words, methane can never have any measurable effect on temperature, even if the IPCC radiative cooling theory were right.
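The back-of-envelope arithmetic quoted above is easy to verify with a short script. All figures are taken from the text as given, not independently checked.

```python
# Equivalent CO2 if CH4 were 20x as potent by weight, using the quoted masses:
ch4_gt, co2_gt = 5.24, 3140.0          # Gt of CH4 and CO2 in the atmosphere
co2_equiv = ch4_gt * 20                # ~105 Gt CO2-equivalent
ratio = co2_gt / co2_equiv             # ~30, i.e. one thirtieth of CO2

# Time for methane to double at the quoted 1980-2010 rate of rise:
rate = (1.8 - 1.6) / 30                # ppm per year
years_to_double = 1.8 / rate           # ~270 years, i.e. "2-3 centuries"

print(f"CO2-equivalent: {co2_equiv:.0f} Gt, about 1/{ratio:.0f} of CO2")
print(f"Doubling time at the current rate: {years_to_double:.0f} years")
```

The script reproduces both figures in the quote: about 105 Gt CO2-equivalent (one thirtieth of CO2) and a doubling time of about 270 years.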

Because only a small fraction in the rise of methane in air can be attributed to farm animals, it is ludicrous to worry about this aspect or to try to farm with smaller emissions of methane, or to tax it or to trade credits.

The fact that methane in air has been leveling off in the past two decades, even though we do not know why, implies that it plays absolutely no role as a greenhouse gas.

More information at THE METHANE MISCONCEPTIONS by Dr Wilson Flood (UK) here.

The Methane Monster in the Arctic

Lakes provide an escape path for the methane by creating “thaw bulbs” in the underlying soil, and lakes are everywhere appearing and disappearing in the Arctic as the permafrost melts. (Whether you get CO2 or a mixture of CO2 plus methane depends critically on water, so lakes are important for that reason also.)

It is clear from the above text that methane is not presently a threat, so concerns are raised about a possible outburst of methane from reserves under Arctic ice.  Dave Cohen explains the alarms from methane hydrate doomers at his blog here.

From a peer-reviewed Nature Education article here, Methane Hydrates and Contemporary Climate Change, Carolyn D. Ruppel (U.S. Geological Survey, Woods Hole, MA) lays out the science and concludes:

Catastrophic, widespread dissociation of methane gas hydrates will not be triggered by continued climate warming at contemporary rates (0.2ºC per decade; IPCC 2007) over timescales of a few hundred years. Most of Earth’s gas hydrates occur at low saturations and in sediments at such great depths below the seafloor or onshore permafrost that they will barely be affected by warming over even 10³ yr. Even when CH4 is liberated from gas hydrates, oxidative and physical processes may greatly reduce the amount that reaches the atmosphere as CH4. The CO2 produced by oxidation of CH4 released from dissociating gas hydrates will likely have a greater impact on the Earth system (e.g., on ocean chemistry and atmospheric CO2 concentrations; Archer et al. 2009) than will the CH4 that remains after passing through various sinks.

Summary:

Natural Gas (75% methane) burns the cleanest with the least CO2 for the energy produced.

Leakage of methane is already addressed by efficiency improvements for its economic recovery, and will apparently be subject to even more regulations.

The atmosphere is a methane sink where the compound is oxidized through a series of reactions, producing one CO2 and two H2O after a few years.

GWP (Global Warming Potential) is CO2-equivalent heat trapping based on laboratory measurements, not real-world effects.

Any IR absorption by methane is limited by H2O absorbing in the same low energy LW bands.

There is no danger this century from natural or man-made methane emissions.

Give a daisy a break (h/t Derek here)


 

Arctic Ocean and Ice Race March 10

 

The finals of the CMQ Canoe Race were held on Feb. 7, 2016. Over 50 teams from Quebec, Canada, France and the United States navigated the frozen waters of the Saint Lawrence River between Quebec City and Lévis.

The annual contest between the ocean and the ice is about to heat up.
March is the peak month for Arctic ice extent, and the daily max may already be in the books. MASIE shows these maximum extents:

2016 day 61 15.08 M km2
2015 day 62 14.91 M km2
Ave.  day 62 15.10 M km2


As the graph shows, 2016 is trending below the ten-year average and higher than last year. Comparing these estimates with SII (Sea Ice Index from NOAA) shows how much lower the extents from that source are. The SII max was 14.56 on day 60. So far, the SII March average is about 500k km2 behind MASIE. Since March extent is on average quite flat over the month, SII has time to catch up, provided it starts showing some increases or a slower decline.

For more on discrepancies between MASIE and SII see here.

This table compares regional extents (km2) on day 69 this year and last (date codes 2015069 and 2016069):

Region 2015069 2016069 km2 Diff.
 (0) Northern_Hemisphere 14542121 14816687 274566
 (1) Beaufort_Sea 1070445 1070445 0
 (2) Chukchi_Sea 966006 965989 -17
 (3) East_Siberian_Sea 1087137 1087120 -17
 (4) Laptev_Sea 897845 897809 -36
 (5) Kara_Sea 916436 904761 -11675
 (6) Barents_Sea 449499 479659 30160
 (7) Greenland_Sea 591426 589934 -1492
 (8) Baffin_Bay_Gulf_of_St._Lawrence 1915854 1644582 -271272
 (9) Canadian_Archipelago 853214 853178 -36
 (10) Hudson_Bay 1260903 1260854 -49
 (11) Central_Arctic 3228809 3184280 -44529
 (12) Bering_Sea 561278 641530 80252
 (13) Baltic_Sea 15672 64882 49210
 (14) Sea_of_Okhotsk 721406 1168782 447376

Overall, 2016 is 275k km2 higher. The main deficit is in Baffin Bay, and it is more than offset by gains in the Sea of Okhotsk and Bering Sea. Barents is slightly higher than last year, though still much lower than average. Comparing the two years, last March had a double dip from low Okhotsk and Barents extents, along with early melting in Bering. Only one of those is low this year, though we must watch out for Baffin Bay. The Baltic Sea has much more ice, though it is a smaller basin.
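As a quick arithmetic check, the regional differences in the table can be recomputed and summed in a short script (values transcribed from the table above):

```python
# Day-69 extents in km2, (2015, 2016) pairs, copied from the table above.
regions = {
    "Beaufort_Sea": (1070445, 1070445),
    "Chukchi_Sea": (966006, 965989),
    "East_Siberian_Sea": (1087137, 1087120),
    "Laptev_Sea": (897845, 897809),
    "Kara_Sea": (916436, 904761),
    "Barents_Sea": (449499, 479659),
    "Greenland_Sea": (591426, 589934),
    "Baffin_Bay_Gulf_of_St._Lawrence": (1915854, 1644582),
    "Canadian_Archipelago": (853214, 853178),
    "Hudson_Bay": (1260903, 1260854),
    "Central_Arctic": (3228809, 3184280),
    "Bering_Sea": (561278, 641530),
    "Baltic_Sea": (15672, 64882),
    "Sea_of_Okhotsk": (721406, 1168782),
}

# Recompute each regional difference and the total across listed seas.
diffs = {name: y2016 - y2015 for name, (y2015, y2016) in regions.items()}
total = sum(diffs.values())
print(total)  # 277875 km2 across the listed seas
```

The listed seas sum to +277,875 km2, slightly above the +274,566 km2 shown on the Northern Hemisphere line; the small residual comes from minor MASIE regions not included in the table.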

Reminder: All of the marginal seas will typically melt out by September.

 

MASIE: “high-resolution, accurate charts of ice conditions”
Walt Meier, NSIDC, October 2015 article in Annals of Glaciology.

 

 


 

Claim: Fossil Fuels Cause Global Warming

 

Recently I addressed this claim by referring to this chart produced by scientists from AARI.


Figure 5.1. Comparative dynamics of the World Fuel Consumption (WFC) and Global Surface Air Temperature Anomaly (ΔT), 1861-2000. The thin dashed line represents annual ΔT, the bold line—its 13-year smoothing, and the line constructed from rectangles—WFC (in millions of tons of nominal fuel) (Klyashtorin and Lyubushin, 2003). Source: Frolov et al. 2009

 

In their commentary, it is clear why the data do not support the claim that fossil fuels cause global warming. From Frolov et al. 2009:

The WFC curve shows an exponential increase, which doubles approximately every 30 years, increasing 25-fold since the middle of the nineteenth century. The global air temperature anomaly curve shows a positive trend of +0.06°C/10 years (Sonechkin et al., 1997). At the same time, there are cyclic changes with periods of about 60 years. The correlation between these curves changes its sign every 30 years, varying from −0.88 (1940-1970) to +0.94 (1970-2000). Hence, there is no direct linear connection between WFC (which indirectly represents CO2 concentration in the atmosphere) and global air temperature. The authors of this study therefore conclude that the WFC increase is not an obvious cause of the increase in global air temperature.

In this post, I am bringing the analysis up to date by showing World Fossil Fuel Consumption (WFFC) compared to Global Temperature Anomalies.

The WFFC numbers come from the US EIA and can be accessed here. I have included only the statistics for coal, oil and gas, which comprise 91% of total energy consumed. The remainder is hydro, nuclear and other renewables. 2015 numbers are not yet available. UAH version 6 provides global temperature anomalies for the lower troposphere.

The correlation overall is moderately positive at 0.60, but the two patterns are markedly different because of the 1998 event. 1980 to 2000 (the period overlapping with the AARI graph) shows a weakly positive 0.48 correlation. From 2000 to 2014 the correlation is almost non-existent at 0.07.
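The sign-flipping sub-period correlations discussed above can be explored with a short stand-alone script. The series below are hypothetical stand-ins, not the real EIA and UAH data: exponential consumption growth and a temperature series that rises to 2000 and then flattens. Only the windowed-correlation mechanics are the point:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def window_corr(years, a, b, start, end):
    """Correlation of two annual series restricted to years [start, end]."""
    sel = [(x, y) for yr, x, y in zip(years, a, b) if start <= yr <= end]
    return pearson([x for x, _ in sel], [y for _, y in sel])

# Hypothetical stand-in series (illustration only): consumption grows 2%/yr,
# temperature rises linearly to 2000, then wobbles around a flat 0.2.
years = list(range(1980, 2015))
fuel = [300.0 * 1.02 ** (y - 1980) for y in years]
temps = [0.01 * (y - 1980) if y < 2000 else 0.2 + 0.01 * (-1) ** y
         for y in years]

r_full = window_corr(years, fuel, temps, 1980, 2014)    # strongly positive
r_recent = window_corr(years, fuel, temps, 2000, 2014)  # near zero
```

Feeding the actual annual EIA consumption totals and UAH anomalies through the same windowing is how the 0.60, 0.48 and 0.07 figures above would be reproduced.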

Summary

In the long term and in the recent short term, use of fossil fuels is not the obvious cause of temperature changes. The context and background for reaching this conclusion is provided below (from the previous post).

Legal Test of Global Warming

In a previous post (here), I discussed the Bradford Hill protocol that has become precedent for trials concerning scientific evidence for legal liability.

Bradford Hill was the epidemiologist and statistician whose work brought clarity and methodology for the courts to consider and rule on accusations such as:

Thalidomide is causing birth defects;
Asbestos dust is causing lung disease;
as well as frequent claims of causal relationships between illness, injury and conditions of work.

The Global Warming Claim

When it comes to Global Warming, the proposition is straightforward:
Rising fossil fuel emissions are causing rising global temperatures.

The procedure to test that claim is described by Nathan Schachtman here.

Proper epidemiological methodology begins with published study results which demonstrate an association between a drug and an unfortunate effect. Once an association has been found, a judgment as to whether a real causal relationship between exposure to a drug and a particular birth defect really exists must be made.

Step 1: Establish an association between two variables.
Proper epidemiological method requires surveying the pertinent published studies that investigate whether there is an association between the medication use and the claimed harm. The expert witnesses must, however, do more than write a bibliography; they must assess any putative associations for “chance, confounding or bias”:

Step 2: Rule out chance as an explanation
The appropriate and generally accepted methodology for accomplishing this step of evaluating a putative association is to consider whether the association is statistically significant at the conventional level.
“Generally accepted methodology considers statistically significant replication of study results in different populations because apparent associations may reflect flaws in methodology.”
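As a numerical illustration of Step 2, the conventional test of whether a Pearson correlation differs from zero converts r into a t statistic with n − 2 degrees of freedom. The r and n below are hypothetical placeholders, not values from any study cited here:

```python
import math

def corr_t_statistic(r, n):
    """t statistic for testing H0: true correlation = 0 (two-sided)."""
    if n < 3 or abs(r) >= 1.0:
        raise ValueError("need n >= 3 and |r| < 1")
    return r * math.sqrt((n - 2) / (1.0 - r * r))

# Example: r = 0.60 over n = 35 annual observations (hypothetical numbers).
t = corr_t_statistic(0.60, 35)
# With 33 degrees of freedom, |t| above roughly 2.03 is significant at the
# conventional 5% level, so such an association would pass Step 2.
```

The same formula applied to a correlation near 0.07 over 15 observations yields a t statistic well under 1, which is the quantitative sense in which chance cannot be ruled out for a weak association.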

Step 3: Rule out bias or confounding factors.
The studies must be structured to analyze and reject other factors or influences, such as non-random sampling, additional intervening variables such as demographic or socio-economic differences.

Step 4: Infer Causation by Applying Accepted Causative Factors
Most often legal proceedings follow the Bradford Hill factors, which are delineated here.

By way of context Bradford Hill says this:

None of my nine viewpoints can bring indisputable evidence for or against the cause-and-effect hypothesis and none can be required as a sine qua non. What they can do, with greater or less strength, is to help us to make up our minds on the fundamental question – is there any other way of explaining the set of facts before us, is there any other answer equally, or more, likely than cause and effect?

Such is the legal terminology for the “null” hypothesis: As long as there is another equally or more likely explanation for the set of facts, the claimed causation is unproven.

The Causative Factors

What aspects of that association should we especially consider before deciding that the most likely interpretation of it is causation?

(1) Strength. First upon my list I would put the strength of the association.

(2) Consistency: Next on my list of features to be specially considered I would place the consistency of the observed association. Has it been repeatedly observed by different persons, in different places, circumstances and times?

To test the Global Warming claim, let’s consider the association between world fuel consumption (WFC) and surface air temperatures (SAT):


Figure 5.1. Comparative dynamics of the World Fuel Consumption (WFC) and Global Surface Air Temperature Anomaly (ΔT), 1861-2000. The thin dashed line represents annual ΔT, the bold line—its 13-year smoothing, and the line constructed from rectangles—WFC (in millions of tons of nominal fuel) (Klyashtorin and Lyubushin, 2003). Source: Frolov et al. 2009


In Figure 5.1, the dynamics of global air temperature anomalies obtained from instrumental measurements over the last 140 years is compared with changes in world fuel consumption (WFC) (Makarov, 1998). The WFC curve shows an exponential increase, which doubles approximately every 30 years, increasing 25-fold since the middle of the nineteenth century. The global air temperature anomaly curve shows a positive trend of +0.06°C/10 years (Sonechkin et al., 1997). At the same time, there are cyclic changes with periods of about 60 years. The correlation between these curves changes its sign every 30 years, varying from −0.88 (1940-1970) to +0.94 (1970-2000). Hence, there is no direct linear connection between WFC (which indirectly represents CO2 concentration in the atmosphere) and global air temperature. The authors of this study therefore conclude that the WFC increase is not an obvious cause of the increase in global air temperature.

The other causative factors could be applied, but they cannot add weight against the argument above.

Case Closed

The legal methodology above is used to decide the causal relationship between two variables. Clearly, in climate science the starting question is: Do rising fossil fuel emissions cause temperatures to rise? Those who have been following the issue know there are many arguments underneath: Why don't temperatures always rise along with CO2? Has chance been eliminated? Are natural factors not confounding the association? And so on.

For myself, I will join in the conclusion reached by Frolov et al., who go on to further explain their position:

In general, although climate models are based on physics, they inevitably include a number of adjustable parameters that are fitted to past temperature changes. We are not aware of a single climate model based on fundamental physics without adjustable parameters that has been subjected to a rigorous test against actual climate data. Climate modelers appear to assume that the Earth’s climate would continue without change, were it not for greenhouse gas emissions. They do not take into account the possibility that natural climate cycles are also acting independently of effects induced by buildup of greenhouse gas concentrations. As we have shown in Chapter 4, there is evidence for cyclic variability of Arctic climates. Furthermore, there is considerable evidence for past variability of global climate as expressed in the so-called Medieval Warm Period (900-1100) and the Little Ice Age (1600-1850). These fluctuations appear to be as great as the temperature rise of the 20th century, yet, there was no contribution of greenhouse gases to these climate changes.

A major challenge in climate modeling is to understand the range of natural fluctuations, and separate these from climate changes induced by human activity (greenhouse gas emissions, land clearing, irrigation, …). The models neglect natural fluctuations because they have no means of incorporating them, and put the entire blame for climate changes since the 19th century on human activity. As a result, they appear to project an extreme view of the future that seems unlikely to be reliable.

Again my thanks to Dr. Bernaerts for the copy of this book:

 

 

 

 

 
