Taxing Carbon More Dangerous Than Not

A new study has anti-fossil-fuel activists twisting in the wind. The paper is “Risk of increased food insecurity under stringent global climate change mitigation policy.”

The paper is behind a paywall, but some detail is available from Carbon Brief: “Global carbon tax in isolation could ‘exacerbate food insecurity by 2050’.” Excerpts in italics with my bolds and some comments.

The research finds that using a blanket “carbon tax” to restrict global warming to 2C above pre-industrial levels – which is the limit set by the Paris Agreement – would put an additional 45 million people at risk of hunger by 2050.

The new study, published in Nature Climate Change, zooms in on how implementing a uniform tax on greenhouse gas emissions from agriculture and other types of land use, in particular, could impact food security worldwide.

The introduction of a carbon tax could threaten food security in three main ways, the researchers say.

First, the tax would raise the cost of food production, especially for carbon-intensive products such as meat.

Second, the tax would raise the costs associated with agricultural expansion, which would lead to higher land rents.

Third, the tax would incentivise the production of biofuels – which would compete with food crops for space, further driving up land rents.

All three of these consequences could drive up food prices, which would be costly for the world’s lowest earners – who spend 60-80% of their income on food.

The new study compares how levels of hunger would differ in a world with climate change alone to a world with climate mitigation, including a uniform carbon tax.

The results show that a blanket carbon tax “would have a greater negative impact on global hunger and food consumption than the direct impacts of climate change”, the scientists say in their research paper.

Instead, policies that can help slash emissions from agriculture while aiding development should be prioritised, says lead author Dr Tomoko Hasegawa, a researcher at the International Institute for Applied Systems Analysis (IIASA) and Japan’s National Institute for Environmental Studies. In a statement, she said:

“Carbon pricing schemes will not bring any viable options for developing countries where there are highly vulnerable populations. Mitigation in agriculture should instead be integrated with development policies.”

To understand the impacts of mitigation efforts, the researchers compared a world where warming is limited to 2C to a world where no efforts to tackle climate change are made before 2050.

The former scenario assumes that the world shifts from a reliance on fossil fuels to low-carbon sources of energy, and that a uniform carbon price is rapidly introduced “across all sectors and regions” and is steadily increased in the coming decades.

(The scenarios use three different “socio-economic pathways” to make assumptions about how factors, such as population growth, are likely to change by 2050.)

The results show that, by 2050, the risk of hunger in some of the world’s least developed countries could be higher in the scenarios with mitigation than in the scenarios without mitigation – despite the fact that the no-mitigation scenarios expect greater declines in crop yields.

In the scenarios without mitigation, the number of people at risk of hunger by 2050 is expected to increase by 5-56 million.

In the scenarios with mitigation, an additional 13-170 million people could face hunger. The increase in those at risk is expected to be largest in sub-Saharan Africa and parts of South Asia, including India and Bangladesh.

The charts below show the expected changes in the number of people at risk of hunger (left) and the number of calories consumed per person per day (right) by 2050 under the mitigation (RCP2.6) and “no-mitigation” (RCP6.0) scenarios.

The expected changes in the number of people at risk of hunger (left) and the number of calories consumed per person per day (right) by 2050 under a mitigation (RCP2.6) and “no-mitigation” (RCP6.0) scenario. The average impacts of climate change (green) and mitigation via the introduction of a carbon tax (orange) are shown. Symbols show the results from different models Source: Hasegawa et al. (2018).

Comment Regarding Climate Direct Effects upon Food Security

The impacts shown in green are hypothetical, though assumed as baseline truth by the researchers. The supposition is: Climate change could threaten global food security by increasing the chance of staple crop failures in many parts of the world, such as across Africa and the US.

The fact is, staple crops are booming with increasing CO2 and the warm temperatures enjoyed by plants and humans alike. Some researchers have been working frantically to claim CO2 damages plant productivity, despite overwhelming evidence to the contrary. One line of attack claims CO2 doesn’t make plants grow larger in the face of other limiting conditions like moisture or soil nutrients. True enough, but reducing CO2 is not the cause or the answer when that happens. See Researchers Against CO2 for the details.

Another line of attack claims the plants are larger but not as nutritious. Studies showed that plants can have lower concentrations of some nutrients owing to their larger size from CO2 enrichment, but the uptake of soil nutrients was not diminished by more CO2 or warmth. See CO2 Destroys Food Nutrition! Not.

Summary

In their research paper, the scientists say the findings “should not be interpreted to downplay the importance of future GHG emissions mitigation efforts, or to suggest that climate policy will cause more harm than good”.

Nothing could be farther from the obvious implications of this analysis. The supposed crop failures are nowhere to be seen, with every year setting new records for productivity. So the future negative effects from rising CO2 are pure speculation, while the economic impacts from taxing carbon pose a real and present danger to food security.

H/T GWPF BENEFITS OF GLOBAL WARMING: RECORD HARVESTS REPORTED IN NUMEROUS COUNTRIES

 

Chicxulub asteroid Apocalypse? Not so fast. August update

The Daily Mail would have you believe Apocalyptic asteroid that wiped out the dinosaurs 66 million years ago triggered 100,000 years of global warming
Chicxulub asteroid triggered a global temperature rise of 5°C (9°F).

This notion has been around for years, but dredged up now to promote fears of CO2 and global warming. And maybe it’s because of a new Jurassic Park movie coming this summer.  But it doesn’t take much looking around to discover experts who have a sober, reasonable view of the situation.

Gerta Keller, Professor of Geosciences at Princeton, has studied this issue since the 1990s and tells all at her website CHICXULUB: THE IMPACT CONTROVERSY. Excerpts below with my bolds.

Update August 17, 2018

This is a repost as background for a fresh article in the Atlantic, “The Nastiest Feud in Science.” H/T Warren Meyer. As noted, the parallels with global warming/climate change are notable.


Introduction to The Impact Controversy

In the 1980s, as the impact-kill hypothesis of Alvarez and others gained popular and scientific acclaim and the mass extinction controversy took an increasingly rancorous turn into scientific and personal attacks, fewer and fewer dared to voice critique. Two scientists stand out: Dewey McLean (VPI) and Chuck Officer (Dartmouth College). Dewey proposed as early as 1978 that Deccan volcanism was the likely cause of the KTB mass extinction; Officer also proposed a likely volcanic cause. Both were vilified and ostracized by the increasingly vocal group of impact hypothesis supporters. By the middle of the 1980s Vincent Courtillot (Institut de Physique du Globe de Paris) also advocated Deccan volcanism, though not as the primary cause but rather as supplementary to the meteorite impact. Since 2008 Courtillot has strongly advocated Deccan volcanism as the primary cause of the KTB mass extinction.

(Overview from Tim Clarey, Ph.D., questioning the asteroid) In secular literature and movies, the most popular explanation for the dinosaurs’ extinction is an asteroid impact. The Chicxulub crater in Mexico is often referred to as the “smoking gun” for this idea. But do the data support an asteroid impact at Chicxulub?

The Chicxulub crater isn’t visible on the surface because it is covered by younger, relatively undeformed sediments. It was identified from a nearly circular gravity anomaly along the northwestern edge of the Yucatán Peninsula (Figure 1). There’s disagreement on the crater’s exact size, but its diameter is approximately 110 miles—large enough for a six-mile-wide asteroid or meteorite to have caused it.

Although some of the expected criteria for identifying a meteorite impact are present at the Chicxulub site—such as high-pressure and deformed minerals—not enough of these materials have been found to justify a large impact. And even these minerals can be caused by other circumstances, including rapid crystallization and volcanic activity.

The biggest problem is what is missing. Iridium, a chemical element more abundant in meteorites than on Earth, is a primary marker of an impact event. A few traces were identified in the cores of two drilled wells, but no significant amounts have been found in any of the ejecta material across the Chicxulub site. The presence of an iridium-rich layer is often used to identify the K-Pg (Cretaceous-Paleogene) boundary, yet ironically there is virtually no iridium in the ejecta material at the very site claimed to be the “smoking gun”!

In addition, secular models suggest melt-rich layers resulting from the impact should have exceeded a mile or two in thickness beneath the central portion of the Chicxulub crater. However, the oil wells and cores drilled at the site don’t support this. The thickest melt-rich layers encountered in the wells were between 330 and 990 feet—nowhere near the expected thicknesses of 5,000 to 10,000 feet—and several of the melt-rich layers were much thinner than 300 feet or were nonexistent.

Finally, the latest research even indicates that the tsunami waves claimed to have been generated by the impact across the Gulf of Mexico seem unlikely.

Summary from Keller

The Cretaceous-Tertiary boundary (KTB) mass extinction is primarily known for the demise of the dinosaurs, the Chicxulub impact, and the frequently rancorous thirty-year-old controversy over the cause of this mass extinction. Since 1980 the impact hypothesis has steadily gained support, which culminated in 1990 with the discovery of the Chicxulub crater on Yucatan as the KTB impact site and “smoking gun” that proved this hypothesis. In a perverse twist of fate, this discovery also began the decline of this hypothesis, because for the first time it could be tested directly based on the impact crater and impact ejecta in sediments throughout the Caribbean, Central America and North America.

Two decades of multidisciplinary studies amassed a database with a sum total that overwhelmingly reveals the Chicxulub impact predates the KTB mass extinction. It’s been a wild and frequently acrimonious ride through the landscape of science and personalities. The highlights of this controversy, the discovery of facts inconsistent with the impact hypothesis, the denial of evidence, misconceptions, and misinterpretations are recounted here. (Full paper in Keller, 2011, SEPM 100, 2011).

Chicxulub Likely Happened ~100,000 years Before the KTB Extinction

Figure 42. Planktic foraminiferal biostratigraphy, biozone ages calculated based on time scales where the KTB is placed at 65Ma, 65.5Ma and 66Ma, and the relative age positions of the Chicxulub impact, Deccan volcanism phases 2 and 3 and climate change, including the maximum cooling and maximum warming (greenhouse warming) and the Dan-2 warm event relative to Deccan volcanism.

Most studies surrounding the Chicxulub impact crater have concentrated on the narrow interval of the sandstone complex, or so-called impact-tsunami. Keller et al. (2002, 2003) placed that interval in zone CF1 based on planktic foraminiferal biostratigraphy, and specifically on the range of the index species Plummerita hantkeninoides, which spans the topmost Maastrichtian. The age of zone CF1 was estimated to span the last 300ky of the Maastrichtian based on the old time scale of Cande and Kent (1995), which places the KTB at 65Ma. The newer time scale (Gradstein et al., 2004) places the KTB at 65.5Ma, which reduces zone CF1 to 160ky.

By early 2000 our team embarked on an intensive search for impact spherules below the sandstone complex throughout NE Mexico. Numerous outcrops were discovered with impact spherule layers in planktic foraminiferal zone CF1 below the sandstone complex and we suggested that the Chicxulub impact predates the KTB by about 300ky (Fig. 42; Keller et al., 2002, 2003, 2004, 2005, 2007, 2009; Schulte et al., 2003, 2006).

Time scales change with improved dating techniques. Gradstein et al. (2004) proposed placing the KTB at 65.5 Ma (Abramovich et al., 2010). This time scale is now undergoing further revision (Renne et al., 2013), placing the KTB at 66 Ma, which reduces zone CF1 to less than 100ky. By this time scale, the age of the Chicxulub impact predates the KTB by less than 100ky based on impact spherule layers in the lower part of zone CF1. See Fig. 42 for illustration.

Unfortunately, this wide interest rarely resulted in integrated interdisciplinary studies or joint discussions to search for common solutions to conflicting results. Increasingly, in a perverse twist of science, new results came to be judged by how well they supported the impact hypothesis, rather than how well they tested it. An unhealthy US versus THEM culture developed where those who dared to question the impact hypothesis, regardless of the solidity of the empirical data, were derided, dismissed as poor scientists, blocked from publication and grant funding, or simply ignored. Under this assault, more and more scientists dropped out, leaving a nearly unopposed ruling majority claiming victory for the impact hypothesis. In this adverse high-stress environment just a small group of scientists doggedly pursued evidence to test the impact hypothesis.

No debate has been more contentious during the past thirty years, or has more captured the imagination of scientists and public alike, than the hypothesis that an extraterrestrial bolide impact was the sole cause of the KTB mass extinction (Alvarez et al., 1980). How did this hypothesis evolve so quickly into a virtually unassailable “truth” where questioning could be dismissed by phrases such as “everybody knows that an impact caused the mass extinction”, “only old fashioned Darwinian paleontologists can’t accept that the mass extinction was instantaneous”, “paleontologists are just bad scientists, more like stamp collectors”, and “it must be true because how could so many scientists be so wrong for so long”? Such phrases are reminiscent of the beliefs that the Earth is flat, that the world was created 6,000 years ago, that Noah’s flood explains all geological features, and of the vilification of Alfred Wegener for proposing that continents moved over time.

Update: Published at National Geographic, February 2018, by Shannon Hall: Volcanoes, Then an Asteroid, Wiped Out the Dinosaurs

What killed the dinosaurs? Few questions in science have been more mysterious—and more contentious. Today, most textbooks and teachers tell us that nonavian dinosaurs, along with three-fourths of all species on Earth, disappeared when a massive asteroid hit the planet near the Yucatán Peninsula some 66 million years ago.

But a new study published in the journal Geology shows that an episode of intense volcanism in present-day India wiped out several species before that impact occurred.

The result adds to arguments that eruptions plus the asteroid caused a one-two punch. The volcanism provided the first strike, weakening the climate so much that a meteor—the more deafening blow—was able to spell disaster for Tyrannosaurus rex and its late Cretaceous kin.

A hotter climate certainly helped send the nonavian dinosaurs to their early grave, says Paul Renne, a geochronologist at the University of California, Berkeley, who was not involved in the study. That’s because the uptick in temperature was immediately followed by a cold snap—a drastic change that likely set the stage for planet-wide disaster.

Imagine that some life managed to adapt to those warmer conditions by moving closer toward the poles, Renne says. “If you follow that with a major cooling event, it’s more difficult to adapt, especially if it’s really rapid,” he says.

In this scenario, volcanism likely sent the world into chaos, driving many extinctions alone and increasing temperatures so drastically that most of Earth’s remaining species couldn’t protect themselves from that second punch when the asteroid hit.

“The dinosaurs were extremely unlucky,” says paleontologist Paul Wignall, a co-author of the study.

But it will be hard to convince Sean Gulick, a geophysicist at the University of Texas at Austin, who co-led recent efforts to drill into the heart of the impact crater in Mexico. He points toward several studies that have suggested that ecosystems remained largely intact until the time of the impact.

Additionally, a forthcoming paper might make an even stronger case that the impact drove the extinction alone, notes Jay Melosh, a geophysicist at Purdue University who has worked on early results from the drilling project. It looks as though the divisive debate will continue with nearly as much ferocity as the events that rocked our world 66 million years ago.

Summary:

So if the Chicxulub asteroid didn’t kill the dinosaurs, what did? Paleontologists have advanced all manner of other theories over the years, including the appearance of land bridges that allowed different species to migrate to different continents, bringing with them diseases to which native species hadn’t developed immunity. Keller and Adatte do not see any reason to stray so far from the prevailing model. Some kind of atmospheric haze might indeed have blocked the sun, making the planet too cold for the dinosaurs — it just didn’t have to have come from an asteroid. Rather, they say, the source might have been massive volcanoes, like the ones that blew in the Deccan Traps in what is now India at just the right point in history.

For the dinosaurs that perished 65 million years ago, extinction was extinction and the precise cause was immaterial. But for the bipedal mammals who were allowed to rise once the big lizards were finally gone, it is a matter of enduring fascination.

This science seems as settled as climate change/global warming, and with many of the same shenanigans.


Strange Days and Witch Hunts

Re-enactment of Renfrewshire Witch Hunt of 1697

Contrary to conventional wisdom, witch hunting did not happen much during the Middle Ages, since in most places it was illegal to believe witches existed. Most of the witch hunts occurred during what’s called “the Renaissance.” Witch hunting continued through the “Age of Rationalism” and for the most part ended about the middle of the “Age of Enlightenment” (in Europe at least).

As a general rule, witches were not hunted as witches; instead it fell under the larger banner of “heresy” — pretty much what is going on now in targeting climatism unbelievers. Since suspected witches were tried as heretics instead of as witches, getting exact numbers is impossible. So much for modern reasonable people being averse to condemning and destroying others with differing beliefs.

Strange days have found us
Strange days have tracked us down
They’re going to destroy our casual joys
Lyrics from song “Strange Days”, The Doors 1967

The lyrics from the Doors classic song “Strange Days” seem (strangely) appropriate today with all of the lashing out of the climate alarmist movement. There are subpoenas flying around and multiple accusations against corporations, contrarian scientists, think tanks and even the federal government for not thinking and acting correctly to “fight climate change.”

Most recently we have the fires of hell awaiting all of us in the “Hothouse” earth depicted by climatists unless repentance occurs in the form of draconian reforms imposed by international self-appointed experts.  As well we are now seeing warning labels on contrarian videos, a new step in pointing out modern witches (deniers).

Steven Hayward writes this week in a Powerline article Make Socialism Scientific Again with insights into what is going on (H/T John Ray)  Of particular relevance is the excerpt below in italics with my bolds.

I’ve never met Prof. Clark, and don’t know him at all, but he is the author of one of my very favorite articles about the institutional problems of science and politics way back in 1980: “Witches, Floods, and Wonder Drugs: Historical Perspectives on Risk Management.” It’s a terrific article. It was the late columnist Warren Brookes who first brought it to my attention. Clark’s comparison of the institutional incentives for witch-hunting with contemporary risk assessment (built partially on the terrific work of the late Aaron Wildavsky) has a perfect application to today’s Malthusian environmentalism and especially climate change thermaggeddonism—especially apt for the Inquisition-like treatment of dissent from climate change orthodoxy.

Some samples from Clark’s article:

Collective action by the central authority was henceforth required, and any action taken against a particular individual was justified in the name of the common good. In the case of the witch hunts, this “common good” justified the carbonization of five hundred thousand individuals, the infliction of untold suffering, and the generation of a climate of fear and distrust—all in the name of the most elite and educated institution of the day. . .

The institutionalized efforts of the Church to control witches can be seen, in retrospect, to have led to witch proliferation. Early preaching against witchcraft and its evils almost certainly put the idea of witches into many a head which never would have imagined such things if left to its own devices. The harder the Inquisition looked, the bigger its staff, the stronger its motivation, the more witches it discovered. . .

Since the resulting higher discovery rate of witch risks obviously justifies more search effort, the whole process becomes self-contained and self-amplifying, with no prospect of natural limitation based on some externally determined “objective” frequency of witch risks in the environment. . .

In witch hunting, accusation was tantamount to conviction. Acquittal was arbitrary, dependent on the flagging zeal of the prosecutor. It was always reversible if new evidence appeared. You couldn’t win, and you could only leave the game by losing. The Inquisition’s principal tool for identifying witches was torture. The accused was asked if she was a witch. If she said no, what else would you expect of a witch? So she was tortured until she confessed the truth. The Inquisitors justified ever more stringent tortures on the grounds that it would be prohibitively dangerous for a real witch to escape detection. Of course an innocent person would never confess to being a witch (a heretic with no prospects of salvation) under mere physical suffering. The few who lived through such tests were likely to spend the rest of their lives as physical or mental cripples. Most found it easier to give up and burn.

You can see here an early version of the “precautionary principle” (“The Inquisitors justified ever more stringent tortures on the grounds that it would be prohibitively dangerous for a real witch to escape detection”) and many other prominent traits of the climate campaign.

Here is Clark’s killer sentence:

Many of the risk assessment procedures used today are logically indistinguishable from those used by the Inquisition.

And this coda, for which you should swap out “risk assessors” with “climate change advocates”:

Today, anyone querying the zeal of the risk assessors is accused at least of callousness, in words almost identical to those used by the Malleus five hundred years ago. The accused’s league with the devil against society is taken for granted. Persecution in the press, courts, and hearing rooms is unremitting, and even the weak rules of evidence advanced by the “science” of risk assessment are swept away in the heat of the chase. This is not to say that risks don’t exist, or that assessors are venal. It is to insist that skeptical, open inquiry remains theory rather than practice in the majority of today’s risk debates. That those debates are so often little more than self-deluding recitations of personal faith should not be surprising.

Cue the refrain that “97 percent of scientists believe in climate change.” Believe? It would seem the Inquisition never really went away: it just changed institutions and identified a different class of witches to hunt down.

Scientists are the equal of any other citizens, and are perfectly entitled to their political opinions. But to represent their opinions with the veneer of scientific authority, as is done here, degrades science, and contributes to the decline in public regard for the scientific community. Prof. Kerry Emanuel of MIT, a “mainstream” climate scientist, put the matter well a few years back:

Scientists are most effective when they provide sound, impartial advice, but their reputation for impartiality is severely compromised by the shocking lack of political diversity among American academics, who suffer from the kind of group-think that develops in cloistered cultures. Until this profound and well-documented intellectual homogeneity changes, scientists will be suspected of constituting a leftist think tank.

Instead of offering vague political nostrums like this article, scientists who are sincerely convinced of the high probability of doom from climate change ought to be offering the specs for the technical changes that need to be made to energy supply (i.e., what carbon intensities, what kind of pollution mitigation, what kind of “geoengineering” strategies, etc). To their credit, many scientists do just this. This group of authors clearly want to be in a different line of work—or at least ought to be.

Summary:

As climate alarmists continue to amp up the fear factor to achieve their political aims, they risk unleashing the heart of darkness hidden under the surface of civil society.

Background:

Summer “Hothouse” Silliness 

Perverse Postmodern Climate: Retreat from Reason

Head, Heart and Science Updated

Greenland Viking Science in Depth

 

Eric the Red slept here: Qassiarsuk features replicas of a Viking church and longhouse. (Ciril Jazbec)

Update August 9, 2018

With an article just published in the South China Morning Post and reblogged at GWPF, I am reposting this more in-depth discussion of the Greenland Vikings. It was originally published in 2017 with information and graphics drawn from a fine essay in the Smithsonian Magazine.

It is refreshing to come across scientists researching a question without the corrupting need to scare the public or to confirm some personal, professional or moral fear of the future. In this case I refer to a wonderful Smithsonian article on the question: Why Did Greenland’s Vikings Vanish? Newly discovered evidence is upending our understanding of how early settlers made a life on the island — and why they suddenly disappeared.

Some excerpts below give the flavor of this persistent effort by researchers unrewarded by the availability of huge grants that now flow to the once-lowly climatologists.  The whole article is fascinating to anyone with curiosity.

The Mystery of Greenland Vikings

But the documents are most remarkable—and baffling—for what they don’t contain: any hint of hardship or imminent catastrophe for the Viking settlers in Greenland, who’d been living at the very edge of the known world ever since a renegade Icelander named Erik the Red arrived in a fleet of 14 longships in 985. For those letters were the last anyone ever heard from the Norse Greenlanders.

They vanished from history.

Europeans didn’t return to Greenland until the early 18th century. When they did, they found the ruins of the Viking settlements but no trace of the inhabitants. The fate of Greenland’s Vikings—who never numbered more than 2,500—has intrigued and confounded generations of archaeologists.

Those tough seafaring warriors came to one of the world’s most formidable environments and made it their home. And they didn’t just get by: They built manor houses and hundreds of farms; they imported stained glass; they raised sheep, goats and cattle; they traded furs, walrus-tusk ivory, live polar bears and other exotic arctic goods with Europe. “These guys were really out on the frontier,” says Andrew Dugmore, a geographer at the University of Edinburgh. “They’re not just there for a few years. They’re there for generations—for centuries.”

So what happened to them?

The Conventional Wisdom

Thomas McGovern used to think he knew. An archaeologist at Hunter College of the City University of New York, McGovern has spent more than 40 years piecing together the history of the Norse settlements in Greenland. With his heavy white beard and thick build, he could pass for a Viking chieftain, albeit a bespectacled one. Over Skype, here’s how he summarized what had until recently been the consensus view, which he helped establish: “Dumb Norsemen go into the north outside the range of their economy, mess up the environment and then they all die when it gets cold.”

Thomas McGovern (with Viking-era animal bones); The Greenlanders’ end was “grim.” (Reed Young)

Accordingly, the Vikings were not just dumb, they also had dumb luck: They discovered Greenland during a time known as the Medieval Warm Period, which lasted from about 900 to 1300. Sea ice decreased during those centuries, so sailing from Scandinavia to Greenland became less hazardous. Longer growing seasons made it feasible to graze cattle, sheep and goats in the meadows along sheltered fjords on Greenland’s southwest coast. In short, the Vikings simply transplanted their medieval European lifestyle to an uninhabited new land, theirs for the taking.

But eventually, the conventional narrative continues, they had problems. Overgrazing led to soil erosion. A lack of wood—Greenland has very few trees, mostly scrubby birch and willow in the southernmost fjords—prevented them from building new ships or repairing old ones. But the greatest challenge—and the coup de grâce—came when the climate began to cool, triggered by an event on the far side of the world.

In 1257, a volcano on the Indonesian island of Lombok erupted. Geologists rank it as the most powerful eruption of the last 7,000 years. Climate scientists have found its ashy signature in ice cores drilled in Antarctica and in Greenland’s vast ice sheet, which covers some 80 percent of the country. Sulfur ejected from the volcano into the stratosphere reflected solar energy back into space, cooling Earth’s climate. “It had a global impact,” McGovern says. “Europeans had a long period of famine”—like Scotland’s infamous “seven ill years” in the 1690s, but worse. “The onset was somewhere just after 1300 and continued into the 1320s, 1340s. It was pretty grim. A lot of people starving to death.”

Amid that calamity, so the story goes, Greenland’s Vikings—numbering 5,000 at their peak—never gave up their old ways. They failed to learn from the Inuit, who arrived in northern Greenland a century or two after the Vikings landed in the south. They kept their livestock, and when their animals starved, so did they. The more flexible Inuit, with a culture focused on hunting marine mammals, thrived.

An aerial photograph of southern Greenland. (Ciril Jazbec)

New Evidence Overturns Past Conceptions

But over the last decade a radically different picture of Viking life in Greenland has started to emerge from the remains of the old settlements, and it has received scant coverage outside of academia. “It’s a good thing they can’t make you give your PhD back once you’ve got it,” McGovern jokes. He and the small community of scholars who study the Norse experience in Greenland no longer believe that the Vikings were ever so numerous, or heedlessly despoiled their new home, or failed to adapt when confronted with challenges that threatened them with annihilation.

“It’s a very different story from my dissertation,” says McGovern. “It’s scarier. You can do a lot of things right—you can be highly adaptive; you can be very flexible; you can be resilient—and you go extinct anyway.” And according to other archaeologists, the plot thickens even more: It may be that Greenland’s Vikings didn’t vanish, at least not all of them.

A New Understanding of How the Vikings Lived on Greenland

 

The Vikings established two outposts in Greenland: one along the fjords of the southwest coast, known historically as the Eastern Settlement, where Gardar is located, and a smaller colony about 240 miles north, called the Western Settlement. Nearly every summer for the last several years, Konrad Smiarowski has returned to various sites in the Eastern Settlement to understand how the Vikings managed to live here for so many centuries, and what happened to them in the end.

“Probably about 50 percent of all bones at this site will be seal bones,” Smiarowski says as we stand by the drainage ditch in a light rain. He speaks from experience: Seal bones have been abundant at every site he has studied, and his findings have been pivotal in reassessing how the Norse adapted to life in Greenland. The ubiquity of seal bones is evidence that the Norse began hunting the animals “from the very beginning,” Smiarowski says. “We see harp and hooded seal bones from the earliest layers at all sites.”

A seal-based diet would have been a drastic shift from beef-and-dairy-centric Scandinavian fare. But a study of human skeletal remains from both the Eastern and Western settlements showed that the Vikings quickly adopted a new diet. Over time, the food we eat leaves a chemical stamp on our bones—marine-based diets mark us with different ratios of certain chemical elements than terrestrial foods do. Five years ago, researchers based in Scandinavia and Scotland analyzed the skeletons of 118 individuals from the earliest periods of settlement to the latest. The results perfectly complement Smiarowski’s fieldwork: Over time, people ate an increasingly marine diet, he says.

Judging from the bones Smiarowski has uncovered, most of the seafood consisted of seals—few fish bones have been found. Yet it appears the Norse were careful: They limited their hunting of the local harbor seal, Phoca vitulina, a species that raises its young on beaches, making it easy prey. (The harbor seal is critically endangered in Greenland today due to overhunting.) “They could have wiped them out, and they didn’t,” Smiarowski says. Instead, they pursued the more abundant—and more difficult to catch—harp seal, Phoca groenlandica, which migrates up the west coast of Greenland every spring on the way from Canada. Those hunts, he says, must have been well-organized communal affairs, with the meat distributed to the entire settlement—seal bones have been found at homestead sites even far inland. The regular arrival of the seals in the spring, just when the Vikings’ winter stores of cheese and meat were running low, would have been keenly anticipated.

The Vikings Were Players in the Ivory Trade

The Norse harnessed their organizational energy for an even more important task: annual walrus hunts. Smiarowski, McGovern and other archaeologists now suspect that the Vikings first traveled to Greenland not in search of new land to farm—a motive mentioned in some of the old sagas—but to acquire walrus-tusk ivory, one of medieval Europe’s most valuable trade items. Who, they ask, would risk crossing hundreds of miles of arctic seas just to farm in conditions far worse than those at home? As a low-bulk, high-value item, ivory would have been an irresistible lure for seafaring traders.

After hunting walruses to extinction in Iceland, the Norse must have sought them out in Greenland. They found large herds in Disko Bay, about 600 miles north of the Eastern Settlement and 300 miles north of the Western Settlement. “The sagas would have us believe that it was Erik the Red who went out and explored [Greenland],” says Jette Arneborg, a senior researcher at the National Museum of Denmark, who, like McGovern, has studied the Norse settlements for decades. “But the initiative might have been from elite farmers in Iceland who wanted to keep up the ivory trade—it might have been in an attempt to continue this trade that they went farther west.”

A bishop’s ring and top of his crosier from the Gardar ruins. (Ciril Jazbec)

How profitable was the ivory trade? Every six years, the Norse in Greenland and Iceland paid a tithe to the Norwegian king. A document from 1327, recording the shipment of a single boatload of tusks to Bergen, Norway, shows that that boatload, with tusks from 260 walruses, was worth more than all the woolen cloth sent to the king by nearly 4,000 Icelandic farms for one six-year period.

Archaeologists once assumed that the Norse in Greenland were primarily farmers who did some hunting on the side. Now it seems clear that the reverse was true. They were ivory hunters first and foremost, their farms only a means to an end. Why else would ivory fragments be so prevalent among the excavated sites? And why else would the Vikings send so many able-bodied men on hunting expeditions to the far north at the height of the farming season? “There was a huge potential for ivory export,” says Smiarowski, “and they set up farms to support that.” Ivory drew them to Greenland, ivory kept them there, and their attachment to that toothy trove may be what eventually doomed them.

A New Theory of Why the Viking Greenland Settlements Failed

For all their intrepidness, though, the Norse were far from self-sufficient, and imported grains, iron, wine and other essentials. Ivory was their currency. “Norse society in Greenland couldn’t survive without trade with Europe,” says Arneborg, “and that’s from day one.”

Then, in the 13th century, after three centuries, their world changed profoundly. First, the climate cooled because of the volcanic eruption in Indonesia. Sea ice increased, and so did ocean storms—ice cores from that period contain more salt from oceanic winds that blew over the ice sheet. Second, the market for walrus ivory collapsed, partly because Portugal and other countries started to open trade routes into sub-Saharan Africa, which brought elephant ivory to the European market. “The fashion for ivory began to wane,” says Dugmore, “and there was also the competition with elephant ivory, which was much better quality.” And finally, the Black Death devastated Europe. There is no evidence that the plague ever reached Greenland, but half the population of Norway—which was Greenland’s lifeline to the civilized world—perished.

The Norse probably could have survived any one of those calamities separately. After all, they remained in Greenland for at least a century after the climate changed, so the onset of colder conditions alone wasn’t enough to undo them. Moreover, they were still building new churches—like the one at Hvalsey—in the 14th century. But all three blows must have left them reeling. With nothing to exchange for European goods—and with fewer Europeans left—their way of life would have been impossible to maintain. The Greenland Vikings were essentially victims of globalization and a pandemic.

Summary

So there is a climate angle to the story of Greenland Vikings. Unlike climate alarmists, these scientists looked deeper and found a more complicated truth. Of course, even this explanation is provisional, because we are talking about science, after all.


Fighting Plasticphobia


Despite the welcome presence of plastic items in our lives, there is mounting plasticphobia driven by the usual suspects: multimillion-dollar enterprises like Greenpeace, Sierra Club, Natural Resources Defense Council, etc. The media and websites stoke fears and concerns about traces of chemicals used in making plastic products. The basic facts bear repeating in this overheated climate of fear, so this post exposes widely circulated poppycock about plastics, with facts to put things in perspective.

Definition: pop·py·cock /ˈpäpēˌkäk/ informal noun meaning nonsense.
Synonyms: nonsense, rubbish, claptrap, balderdash, blather, moonshine, garbage. Origin: mid 19th century, from Dutch dialect pappekak, from pap ‘soft’ + kak ‘dung.’

Below are some points to consider in favor of plastics.  Examples below frequently mention plastic bags, bottles, and food containers, all subject to demonizing reports from activists.

Plastics are functional.

It feels like we have always had plastic. It is so widespread in our lives that it’s hard to imagine a time without it. But in reality, plastic products were only introduced in the 1950s. That was a time when the Earth’s population was 2.5 billion people and the global annual production of plastic was 1.5 million tonnes. Now, nearly 70 years later, plastic production exceeds 300 million tonnes a year and the world population is on its way to 8 billion. If this trend continues, another 33 billion tonnes of plastic will have accumulated around the planet by 2050.
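The trajectory sketched above can be checked with simple arithmetic. The sketch below is a rough, hypothetical extrapolation (not figures from any study): it computes the average annual growth rate implied by 1.5 million tonnes in 1950 rising to 300 million tonnes today, then projects cumulative output to 2050 under a couple of assumed rates, showing how sensitive the "33 billion tonnes" figure is to the growth assumption.

```python
import math

def growth_rate(p0, p1, years):
    """Average annual exponential growth rate implied by p0 -> p1 over `years`."""
    return math.log(p1 / p0) / years

# Figures quoted in the text (tonnes/year), ~68 years apart
r_hist = growth_rate(1.5e6, 300e6, 68)  # roughly 8%/yr

def cumulative(p_now, rate, years):
    """Total production over `years`, assuming exponential growth at `rate`."""
    return sum(p_now * math.exp(rate * t) for t in range(1, years + 1))

# Projected additional output over the ~32 years to 2050, at two assumed rates
for r in (r_hist, 0.06):
    total = cumulative(300e6, r, 32)
    print(f"growth {r:.1%}: ~{total / 1e9:.0f} billion tonnes added by 2050")
```

At the historical rate the cumulative figure comes out somewhat above 33 billion tonnes; at a slightly slower assumed rate it comes out somewhat below, so the text's projection sits within the plausible range.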

Versatile plastics inspire innovations that help make life better, healthier and safer every day. Plastics are used to make bicycle helmets, child safety seats and airbags in automobiles. They’re in the cell phones, televisions, computers and other electronic equipment that makes modern life possible. They’re in the roofs, walls, flooring and insulation that make homes and buildings energy efficient. And plastics in packaging help keep foods safe and fresh.

Example: Conventional plastic shopping bags are not just a convenience but a necessity. They are multi-use, multi-purpose bags with a shorter life: carry bags for groceries that also facilitate impulse purchases and, reused, help manage household and pet waste (and in Toronto, organics collection). They have a very high alternate-use rate in Ontario of 59.1% (Ontario MOE data).

A move to reusable bags will not eliminate the need for shorter-life bags. Householders will have to supplement their reusable bags with paper or kitchen-catcher-type bags for household and pet waste. In Ireland, the virtual elimination of plastic shopping bags by a high bag tax led to a 77% increase in purchases of kitchen catchers, which contain up to 76% more plastic than conventional shopping bags, and a 21% net increase in plastic consumed. That bags are a necessity is reinforced by Decima Research, which found that 76% of Canadians would purchase kitchen catchers if plastic shopping bags were not available at retail checkouts.

Beyond bags, manufacture of goods such as automobiles increasingly use plastics to reduce weight and fuel consumption, as well as meet requirements for recycling.

Plastics are Cheap.
Alternatives consume much more energy. Plastics are made mostly of petroleum refining by-products.

Paper bags generate 50 times more water pollutants, and 70% more emissions than plastic bags.
Plastic bags generate 80% less solid waste than paper bags.
Plastic bags use 40% less energy as compared to paper bags.
Even paper bags manufactured from recycled fiber use more fossil fuels than plastic bags.

On top of all this, if plastic bag bans like California’s end up causing people to use more paper bags — instead of bringing their reusable ones to the store — it’ll certainly end up being worse for the environment. Research shows that making a paper bag consumes about four times more energy than a plastic bag, and produces about four times more waste if it’s not recycled.

These numbers can vary based on agricultural techniques, shipping methods, and other factors, but when you compare plastic bags with food, it’s not even close. Yet for whatever reason, we associate plastic bags — but not food production — with environmental degradation. If we care about climate change, cutting down on food waste would be many, many times more beneficial than worrying about plastic bags.

Plastics are Durable.
Plastics are highly inert and do not easily degrade or decompose. Without sunlight, they can last for centuries.

Almost all bags are reusable; even the conventional plastic shopping bag has a reuse rate of 40-60% in Canada.

Conventional plastic shopping bags are highly recyclable in Canada because there is a strong recycling network across the country. Recycling rates are quite high in most provinces.

Plastics are Abused.
Because plastic items are useful, cheap and durable, people leave them around as litter.
Plastics should be recycled or buried in landfill.

In a properly engineered landfill, nothing is meant to degrade. No bag, whether reusable or a conventional plastic shopping bag, will decompose in a landfill, and that actually helps the environment by preventing the release of greenhouse gases like methane.

This myth rests on a common misunderstanding of the purpose of landfills and how they work. Modern landfills are engineered to entomb waste and prevent decomposition, which would otherwise create harmful greenhouse gases like methane and carbon dioxide.

Plastics are Benign.
Plastics are not toxic, nor do they release greenhouse gases.

In Canada, plastic shopping bags are primarily made from a by-product of natural gas production, ethane.

Polyethylene bags are made out of ethane, a component of natural gas. Ethane is extracted to lower the BTU value of the gas in order to meet pipeline and gas utility specifications and so that the natural gas doesn’t burn too hot when used as fuel in our homes or businesses. The ethane is converted, and its BTU value is “frozen” into a solid form (polyethylene) using a catalytic process to make a plastic shopping bag.

There have been claims that chemicals in plastics can leach into food or drink and cause cancer. In particular, there have been rumours about chemicals called Bisphenol A (BPA) and dioxins. Hoax emails have spread warnings about dioxins being released when plastic containers are reused, heated or frozen. These are credited to Johns Hopkins University in America, but the university denies any involvement.

Some studies have shown that small amounts of chemicals from plastic containers can end up in the food or drinks that are kept inside them. But the levels of these are very low.

Studies may also look at the effect of these chemicals on human cells. But they will often expose them to much higher levels than people are exposed to in real life. These levels are also much higher than the limits which are allowed in plastic by law. There is no evidence to show using plastic containers actually causes cancer in humans.

The European Food Safety Authority did a full scientific review of BPA in 2015 and decided there was no health risk to people of any age (including unborn children) at current BPA exposure levels. They are due to update this in 2018.

In the UK there is very strict regulation about plastics and other materials that are used for food or drink. These limits are well below the level which could cause harm in humans.

“Generally speaking, any food that you buy in a plastic container with directions to put it in the microwave has been tested and approved for safe use,” says George Pauli, associate director of Science and Policy at the US FDA’s Center for Food Safety and Applied Nutrition.

Elizabeth Whelan, of the American Council on Science and Health, a consumer-education group in New York, thinks that the case against BPA and phthalates has more in common with those against cyclamates and Alar than with the one against lead. “The fears are irrational,” she said. “People fear what they can’t see and don’t understand. Some environmental activists emotionally manipulate parents, making them feel that the ones they love the most, their children, are in danger.” Whelan argues that the public should focus on proven health issues, such as the dangers of cigarettes and obesity and the need for bicycle helmets and other protective equipment. As for chemicals in plastics, Whelan says, “What the country needs is a national psychiatrist.”

Plastics are A Scapegoat.
Rather than using plastics responsibly, some advocate banning them.

The type of bag you use matters less than what you put into it. When it comes to both climate change and trash production, eliminating plastic bags is a symbolic move, not a substantial one. Encouraging people to cut down on food waste, on the other hand, would actually mean something.

Litter audit data from major Canadian municipalities shows that plastic shopping bags are less than 1% of litter. The City of Toronto 2012 Litter Audit shows that plastic shopping bags were 0.8% of the entire litter stream.

Focusing on the less than 1% of litter that is plastic bags does not address the other 99%. Litter is a people problem, not a product problem. Even if you removed all plastic shopping bag litter, 99% of the litter would still be a problem.

Believe it or not, plastic bags are one of the most energy-efficient things to manufacture. According to statistics, less than 0.05% of a barrel of crude oil is used for manufacturing plastic bags in the US. On the other hand, 93% to 95% of each barrel is used for heating purposes and fuel.

In fact, most of the plastic bags used in the US are made from natural gas, 85% of them to be exact. And although plastic bags are made from natural gas and crude oil, the overall amount of fossil fuels they consume during their lifetime is significantly less than paper bags or compostable plastic.

So, banning or taxing plastic bags will do essentially nothing to curb America’s oil consumption. After all, they use barely a fraction of a percent!

Resources:  

https://news.grida.no/debunking-a-few-myths-about-marine-litter

http://www.allaboutbags.ca/myths.html

https://science.howstuffworks.com/environmental/green-science/paper-plastic1.htm

Top 7 Myths about Plastic Bags

https://www.plasticstoday.com/extrusion-pipe-profile/fear-plastics-and-what-do-about-it/13536413858942

https://www.cancerresearchuk.org/about-cancer/causes-of-cancer/cancer-controversies/plastic-bottles-and-food-containers

https://www.newyorker.com/magazine/2010/05/31/the-plastic-panic

https://www.webmd.com/food-recipes/features/mixing-plastic-food-urban-legend#3

Duped into War on Plastic

Step aside Polar Bear, It’s Turtle Time!

Every day now, everywhere in the media, someone is lamenting the presence of plastics and proposing ways to eliminate straws and other plastic items. Terence Corcoran in the Financial Post explains how we got here: How green activists manipulated us into a pointless war on plastic. Excerpts in italics with my bolds.

The disruptive Internet mass-persuasion machine controlled by the major corporate tech giants, capable of twisting and manipulating the world’s population into believing concocted stories and fake news, is at it again, this time spooking billions of people into panic over plastic. Except…hold on: Those aren’t big greedy corporations and meddling foreign governments flooding the blue planet with alarming statistics of microplastics in our water and gross videos of turtles with straws stuck up their noses and dead birds with bellies stuffed with plastic waste.

As Earth Day/Week 2018 came to a close, the greatest professional twisters and hypers known to modern mass communications — green activists and their political and mainstream media enablers — had succeeded in creating a global political wing-flap over all things plastic.

That turtle video, viewed by millions and no doubt many more through Earth Day, is a classic of the genre, along with clubbed baby seals and starving polar bears. Filmed in 2015 near waters off the coast of Guanacaste, Costa Rica, the video was uploaded as news by the Washington Post in 2017 and reworked last week by the CBC into its series on the curse of plastics: “ ‘We need to rethink the entire plastics industry’: Why banning plastic straws isn’t enough.”

New York Governor Andrew Cuomo introduced a bill to ban single-use plastic shopping bags. In Ottawa, Prime Minister Justin Trudeau wants the G7 leaders to sign on to a “zero plastics waste charter” while British Prime Minister Theresa May promised to ban plastic straws.

No need for a secret data breach, a Russian bot or a covert algorithm to transform a million personal psychological profiles into malleable wads of catatonic dough. All it takes is a couple of viral videos, churning green activists and a willing mass media. “Hello, is this CBC News? I have a video here of a plastic straw being extracted from the nose of sea turtle. Interested?”

One turtle video is worth 50-million Facebook data breaches, no matter how unlikely the chances are that more than one turtle has faced the plastic-straw problem. If the object in the unfortunate turtle’s nasal passage was a plastic straw (was it analyzed?), it would have likely come from one of the thousands of tourists who visit Costa Rica to watch hundreds of plastics-free healthy turtles storm the country’s beaches for their annual egg-hatching ritual.

That the turtles are not in fact threatened by plastic straws would be no surprise. It is also hard to see how banning straws in pubs in London and fast-food joints in Winnipeg would save turtles in the Caribbean or the Pacific Ocean.

Creating such environmental scares is the work of professional green activists. A group called Blue Ocean Network has been flogging the turtle video for three years, using a propaganda technique recently duplicated by polar-bear activists. Overall, the plastic chemical scare follows a familiar pattern. Canadians will remember Rick Smith, the former executive director of Environmental Defense Canada and co-author of a 2009 book, Slow Death by Rubber Duck. In the book, Smith warned of how the toxic chemistry of everyday life was ruining our health, reducing sperm counts and threatening mothers and all of humanity. The jacket cover of Slow Death included a blurb from Sophie Gregoire-Trudeau, who expressed alarm about all the chemicals “absorbed into our bodies every day.”

To mark Earth Day 2018, orchestrated as part of a global anti-plastics movement, Smith was back at his old schtick with an op-ed in The Globe and Mail in which he warns “We must kill plastics to save ourselves.” Smith, now head of the left-wing Broadbent Institute in Ottawa, reminded readers that since his Slow Death book, a new problem has emerged, “tiny plastic particles (that) are permeating every human on earth.” He cites a study that claimed “83 per cent of tap water in seven countries was found to contain plastic micro-fibres.”

You would think Smith would have learned by now that such data is meaningless. Back in 2009, Smith issued a similar statistical warning. “A stunning 93 per cent of Americans tested have measurable amounts of BPA in their bodies.” BPA (bisphenol A) is a chemical used in plastics that Smith claimed produced major human-health problems.

Turns out it wasn’t true. The latest — and exhaustive — science research on BPA, published in February by the U.S. National Toxicology Program, concluded that “BPA produces minimal effects that were indistinguishable from background.” Based on this comprehensive research, the U.S. Food and Drug Administration said current uses of BPA “continue to be safe for consumers.”

Might the same be true for barely-measurable amounts of micro-plastics found today in bottled water and throughout the global ecosystem? That looks possible. A new meta-analysis of the effects of exposure to microplastics on fish and aquatic invertebrates suggests there may be nothing to worry about. Dan Barrios-O’Neill, an Irish researcher who looked at the study, tweeted last week that “There are of course many good reasons to want to curb plastic use. My reading of the evidence — to date — is that negative ecological effects might not be one of them.”

Instead of responding to turtle videos and images of plastic and other garbage swirling in ocean waters with silly bans and high talk of zero plastic waste, it might be more useful to zero in on the real sources of the floating waste: governments that allow it to be dumped in the oceans in the first place.

Update: August 4, 2018

Claim: There is a “sea of plastic” the size of Texas in the North Pacific Gyre north of Hawaii

First question: have you ever seen an aerial or satellite photograph of the “sea of plastic”? Probably not, because it doesn’t really exist. But it makes a good word-picture and after all plastic is full of deadly poisons and is killing seabirds and marine mammals by the thousands.

This is also fake news and gives rise to calls for bans on plastic and other drastic measures. Silly people are banning plastic straws as if they were a dire threat to the environment. The fact is a piece of plastic floating in the ocean is no more toxic than a piece of wood. Wood has been entering the sea in vast quantities for millions of years. And in the same way that floating woody debris provides habitat for barnacles, seaweeds, crabs, and many other species of marine life, so does floating plastic. That’s why seabirds and fish eat the bits of plastic, to get the food that is growing on them. While it is true that some individual birds and animals are harmed by plastic debris, discarded fishnets in particular, this is far outweighed by the additional food supply it provides. Plastic is not poison or pollution, it is litter.

Patrick Moore, PhD

Footnote:  Dare I say it?  (I know you are thinking it.): “They are grasping at straws, with no end in sight.”

 

 

Climate Comics

It takes a properly skeptical mindset to see the humor in the behavior of those believing in global warming/climate change. Here are some cartoons that came to my attention recently.

Paris Accord



Media Climate Reporting

Biased Climate Science


Misguided Climate Policies


(Cartoon: “The wind should pick up any minute now”)

H/T to Lisa Benson, Chip Bok, Mike Lester, Michael Ramirez and Gary Varvel. Work by Rick McKee was featured in a previous post, Cavemen Climate Comics.

 

 

Energy Changes Society: Transition Stories

Energy Sources and the Rise of Civilization. Source: Bill Gates

Richard Rhodes won the Pulitzer Prize for The Making of the Atomic Bomb. He has written many other books, and his new book about energy is called Energy: A Human History.

He takes us on a journey through the history of energy transitions, from wood to coal to oil to renewables and beyond. Some stories are well known, others much less so, with a fascinating cast of characters reaching all the way back to Elizabethan England.

It also provides fascinating insight into how energy history can help us understand a possible future energy transition toward a lower-carbon economy, providing affordable, reliable and sustainable energy for a growing global population. Rhodes shared some transition stories in an interview with Jason Bordoff at Columbia University, available as a podcast and transcription entitled Richard Rhodes — Energy: A Human History. Excerpts below in italics are from Rhodes unless otherwise indicated, with my light editing, headers, bolds and images.

Jason Bordoff: I think some people may be familiar with more recent energy innovations: electricity obviously, oil, nuclear power. But you really start with animals and with wood, with what it meant for human civilization as we know it to depend on those energy sources, and how transformative it was to then convert initially to coal and then beyond. Talk a little bit about how big a deal that was and what it meant for the human experience as we know it.

From Wood to Coal in Elizabethan England

Richard Rhodes: Well, the story of the transition by the Elizabethan English from wood to coal was one of the most fascinating and in some ways comical although of course it wasn’t comical for them. One of the things that I wanted to do with this book was to tell the human stories that are behind the technologies involved, since so many books on the history of energy focused almost entirely on the technological changes.

The Wood Burning Society

And of course there are vast human stories, because changing from one source of energy to another is as much a social phenomenon as it is a technical phenomenon, perhaps more so. So, the Elizabethans had been cutting down their trees in vast numbers, primarily for firewood for their homes. And they burned firewood usually on stone platforms or fireplaces set against the wall that didn’t have chimneys.

They liked the smell of wood and they thought that the smoke hardened their rafters, so either there was just a hole in the roof leading straight up from the fireplace or they let the smoke drift through the rooms and out through the windows. Well, that was fine as long as they had enough wood, but as they cut the wood down farther and farther away from London it got more and more expensive to transport.

Substituting Coal for Wood

So, eventually it reached the point where wood was really too expensive for the common people to afford. At that point the only alternative they had was really smelly bituminous coal from Newcastle, up the country in the northeast, and they didn’t like its characteristics compared to wood. First of all, imagine lighting a bituminous coal fire in the middle of your living room with no place for the smoke to go, and imagine the coughing and choking from breathing that. And then on top of that, imagine roasting your good English beef over a coal fire, with all the sulfur that’s in coal smoke.

In a way England was just one vast coal mine, made of these layers of coal which was black and dirty and smelled sulfurous when you burned it. Coal was literally the devil’s excrement: if the devil had hell down in the center of the earth, this is where his body waste accumulated on its way up toward the surface. Well, that obviously didn’t endear coal to the populace. So, they really struggled with it, and basically what happened is the rich kept buying wood, which they could afford, and the poor had to find a way to survive with coal, and they hated it.

A New King Adopts Coal

The transition really was a social transition. Elizabeth died in 1603, at the start of the seventeenth century, and King James VI of Scotland became James I of England. He came down to London, and the Scots, who had much thinner forests up north than the English, had already switched to coal a long time before. They had been working with coal for a hundred years.

And so Scottish coal was better quality, it didn’t have so much sulfur in it, so when the King came to London and started burning coal in the castle it became fashionable. Well, if the King does it, I suppose we can too, was the result, and after that the transition was much facilitated. In addition they had to retrofit all the homes that didn’t have proper chimneys with chimneys, which is another lesson that extends across the entire history of energy transitions.

Converting Society to Burn Coal

In this regard it seems so simple: you find a new source of energy when the old one is causing you trouble, and you switch over to it. I mean, that’s the way people are talking today about wind and solar and other renewables. But it turns out it takes anywhere from 50 to 100 years to make a full-scale energy transition, because it’s not just a matter of the technology at all; it’s a matter of all sorts of social and societal changes.

In this case, for example, they had to retrofit all the chimneys. They had to open coal mines and find a way to transport the coal down to London. They had to develop markets where they would sell the coal. And most of all, you had to figure out how to burn it in your home without making the place uninhabitable. So, it took a while. It was not really until the 1650s and 1660s that coal had really taken hold in England, and then of course they had the problem of air pollution.

Inventing the Coal Industry and Society

Well, just staying with the English: once they started digging coal, they first dug, of course, the superficial layers that tended to outcrop on hillsides. So they could easily drain their mines just by putting in what they call adits, which were basically channels for the water to flow out. But as they used up the superficial coal and continued to dig deeper, they began to intersect the water table, and the mines began flooding. They tried pumping them out with horses and what were called whims, which were basically horse-turned pumps.

But that got more and more difficult as the mines continued to deepen; they were going down as far as 800 feet below ground to dig their coal. It's hard to pump water that far with just a couple of horses. So the solution that they found as time went on, and this is now the early eighteenth century, was to develop an engine, an early form of steam engine that was very inefficient, less than 1% efficient.

So, it was a big thing the size of a house; it would sit on top of the coal mine opening at the surface and pump out the water, so that the mines could continue to be worked. This was the Newcomen engine, which basically produced a vacuum which then allowed atmospheric pressure to rush in and function as a pump. That limited its lift to what the pressure of the atmosphere could support, about 32 feet. And therefore there continued to be a desire for innovation, a better way to pump water farther, because consider what happened if you had, let's say, a 300-foot shaft in a coal mine.

Newcomen Atmospheric Steam Engine

Energy Necessity Calls for Innovation

The only way you could pump it with a Newcomen engine would be to put engines every 32 or so feet up and down the shaft, which was not a very efficient idea, especially since coal mines tend to release a certain amount of methane and other gases, and there were lots of explosions that people had to deal with. So, it quickly became apparent that there was a place for a better steam engine. That's where James Watt, the Scotsman, came along and invented a true steam engine, one that worked by using steam to expand and push the piston back and forth.
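The Newcomen engine's roughly 32-foot limit is basic physics: a vacuum pump can only raise water as high as atmospheric pressure will push it. A minimal sketch of that arithmetic (standard physical constants; the 32-foot practical figure is from the talk):

```python
import math

# A vacuum pump can raise water only as high as atmospheric pressure
# can push it up the evacuated pipe.
P_ATM = 101_325.0   # standard atmospheric pressure, Pa
RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def max_suction_lift_feet():
    """Height a perfect vacuum can raise water, in feet."""
    meters = P_ATM / (RHO_WATER * G)
    return meters * 3.28084

def stages_needed(shaft_depth_ft, lift_per_stage_ft=32.0):
    """Engines needed to relay water up a shaft, one per 32-foot stage."""
    return math.ceil(shaft_depth_ft / lift_per_stage_ft)

print(f"Theoretical max lift: {max_suction_lift_feet():.1f} ft")  # ~33.9 ft
print(f"Stages for a 300 ft shaft: {stages_needed(300)}")         # 10
print(f"Stages for an 800 ft mine: {stages_needed(800)}")         # 25
```

The theoretical maximum is just under 34 feet; friction and imperfect vacuum brought the practical figure down to about 32, which is why a 300-foot shaft would have needed a relay of ten engines.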

And it could pump as much as its capacity was built to pump. Then of course they had the problem of moving the coal from the mine down to the river or the ocean in order to barge it to London. Again, moving stuff around turns out to be a large part of the problem in dealing with these forms of energy. At first the mines were close enough to the water to simply put the coal on a cart and roll it downhill. They used rails to do that, originally wooden rails, but then they started covering the wooden rails with cast-iron plates on top to make them more efficient.

A late version of a Watt double-acting steam engine, built by D. Napier & Son (London) in 1859.

Transformation into a Coal Society

And you know, once you switch from wood to coal, and as the country began to industrialize, particularly with the advent of the steam engine, coal production got more and more enormous. It wasn't just a matter of heating homes anymore; it became a matter of running factories as well. And you couldn't do that with bags of coal on the saddles of horses; you needed some larger-scale way to move the coal around.

Once the mines were farther back from the valleys where the rivers ran, or the canals as they came to be, you had to find a way to move the material uphill as well as downhill. And horses weren't going to do that job, not at the scale that England was operating by then. So someone realized that if you had a small steam engine, and by then Watt's engines could be made fairly small, you could mount it on wheels and move the coal with the steam engine, which is essentially a railroad locomotive.

One of the early steam engine locomotives.

The Coal-Based Society Emerges

This whole story is about how self-reinforcing all of these things were. You need to go deeper and deeper to get the coal for heating and cooking. You develop an innovation like the steam engine to do that, which enables an innovation to transport the coal, and that new innovation is in turn powered by the very energy you were trying to get in the first place.

And once it was clear that you could move railroad carts of coal with a steam engine, someone realized that you could move people too. And England suddenly blossomed with railroads all over the country. The canal age was over and the railroad age began, and it all followed from this early transition from wood to coal, as did all the industrialization that came with the development of a stable, reliable source of continual power, which water power had not been and which animal power could supply only in limited amounts.

Here was an engine: you fed it coal and it gave you a turning wheel that would drive looms to weave cotton, mills to make steel, whatever you needed to do. So it really was an innovation that went stage by stage, each one piggybacking on the last, a fascinating transition time in history.

The Downside of Coal Energy

One of the interesting things is very clear, and it's as clear today in Beijing as it was in London in 1660. The first thing you do is get your energy: you do what you have to do to increase the energy supply to your country or your society. Then, as a kind of luxury good in a way, you start looking at how to reduce the baleful side effects, such as air pollution, that come along with that source of energy.

The first paper published by the newly formed Royal Society of London in 1664 was a study of how to improve the air in London, and it was remarkably similar to today's ideas: move industry into the suburbs, ring the city with plant life, trees. This particular writer proposed all sorts of wonderful trees that put out perfume during their flowering season, to be planted in a belt around London.

The King, having just been restored to power after the Roundhead Revolution that had caused his father to be beheaded, was much too busy selling monopolies and refilling his coffers to actually do anything about it. But the point is that people were thinking about this prospect, and it was not different from what happened in Pittsburgh at the turn of the century, when the city was so filled with coal smoke that from a nearby hill you could barely see it.

Pittsburgh train station in the 1940s.

Pittsburgh Faces Coal Air Pollution

And that was pretty much true in most American cities up until the 1950s. As they went about cleaning up their air in the early 1950s, there was a proposal by the United States government to share the cost of building the first commercial nuclear power plant in the United States, at a place called Shippingport near Pittsburgh, on the river. I talked to the president of Duquesne Light, which was the company that was going to be the private contractor for this power plant.

He said, you know, we sold this power plant to the city council of Pittsburgh as a green technology. People have come to think of nuclear as the devil's excrement, but compared to burning coal, compared to burning what they had available to burn at the time, nuclear was great, with its total absence of carbon production. The past can really inform the present when you look at how things have been done before, why they were done that way, and what lessons they offer us in the process.

Smog in Los Angeles

The stories that I tell are very much intertwined. So let’s jump to Los Angeles in the 1950s when what we now call smog was beginning to be a very serious problem there. The companies that refined oil in and around Los Angeles wanted to do whatever was available to clean up the air pollution, because it was commonly believed that it all was coming from their refineries or from trash burning.

Previously, cities and states had been focused primarily on coal smoke and its baleful effects on the atmosphere. Smog was originally smoke and fog, the two words combined. But in the 1950s in Los Angeles it became this photochemical phenomenon going on in the atmosphere that was making everything look brown. And the question now was: what do you do about that?

Anti-Smog Device.

Dr. Haagen-Smit at Caltech was carrying out an exercise in identifying the perfume essence of ripe pineapple. He had a room full of ripe pineapples, and he was sucking the air in the room through a machine that used liquid nitrogen to freeze out of the air the essential aroma chemicals. It was to Dr. Haagen-Smit that the California county people turned when they asked if he could identify the components of this smog that was in the air.

So, he used the same machinery, but he put the pineapples away, opened the window, and sucked about 30,000 liters of California smog into the room, ran it through his machine, and ended up with a few drops of very nasty, brownish, sticky material, which was the essence of California photochemical smog, and he identified where it came from. Other things, like the refineries and so forth, were certainly a part of it.

But the main component was automobile exhaust, and that gave Los Angeles the beginning of what turned out to be a large national struggle with the automobile manufacturers: to get them to put catalytic converters on their automobiles and eventually to get rid of the nitrogen oxides, another component of automobile exhaust that was deadly. I repeat: this is not merely a technical book; it is really a collection of the most amazing human stories.

Haagen-Smit was, of course, then put down by the great laboratories that the automobile companies had turned to in order to refute his work. But he was a survivor of World War II, so he knew how to make things simple. With his simple laboratory experiments he was able to identify what needed to be done, and finally, by the 1980s, the entire country was dealing with smog by adding catalytic converters to cars.

Whale Oil and Petroleum

It's a truism of the oil industry that petroleum saved the whales. They say that because one of the main sources of lighting for wealthier people was whale oil, which was pretty expensive, particularly spermaceti, the very lovely refined oil that sperm whales carry in their heads as a way of controlling their buoyancy. By heating and cooling the oil in their heads they can adjust their neutral buoyancy, and therefore don't sink to the bottom or rise to the top unless they want to.

So, these beautiful whales were used to make candles, collected at the rate of 10,000 whales a year at the height of the whaling industry, as Herman Melville beautifully describes it in Moby Dick. But most people couldn't afford whale oil; it was a pretty expensive item. What they actually used, and I like most people had never known about this, was something called burning fluid, which was basically the sap of the longleaf pines of the southeastern United States, which could be refined into turpentine, and the turpentine could then be mixed with plain alcohol.

And with a little bit of menthol to sweeten the smell, because burning turpentine is not a great smell, this became something called burning fluid, which is what almost everyone used in their lamps. One particular brand of burning fluid was called kerosene; we know that name from its later application to petroleum, and I'll jump to that in a second. So most people burned burning fluid in their lamps, or they simply burned cheap tallow candles, which smell like burning beef fat, not a great smell in your home either. Then came the discovery of petroleum in 1859, or rather the discovery of "rock oil" or "coal oil". If you could drill for this stuff and pump it out in vast quantities, you could make all the kerosene you wanted.

There had been petroleum seeps in various parts of the country, particularly one in Pennsylvania, where they tried to use the petroleum by soaking it up in blankets as it floated on the surface of streams where it oozed out from underground, squeezing the blankets out into a jug, and then selling that as liniment to rub on your sores and on your gums, and to swallow as a health tonic and so forth, if you can imagine. Anyway, then Colonel Drake went off to Oil City, as it came to be called, drilled a well, and showed how you could pump oil out of the ground, or indeed some wells would pump it for you and spout it into the air.

Gasoline, A Dangerous Byproduct, Transforms Society

All of a sudden petroleum was the new stuff, but it wasn't yet the new stuff for powering machinery; nobody had found that use. Its first use, for the next 50 years, was for lighting, once they figured out how to refine petroleum into what was now called kerosene, or for lubrication. But since the automobile hadn't been invented, the refiners had among their waste products this stuff called gasoline, which was much too volatile to put in a lamp.

The lamp would blow up from the fumes, so they would either pour it out on the ground to evaporate into the air, or they would dump it into the streams and rivers of America in the dark of night, as so much other waste was dumped in those days. Beginning in the 1880s, the question among oil refiners was whether they would run out of possible uses for their stuff. In a way, the automobile saved petroleum. It was the automobile that came along just at the turn of the century, and the industry took off.

Beyond the Petroleum Society

Jason Bordoff: You write about the disruptive, unexpected consequences of these innovations. As you said, the whales are being slaughtered by the tens of thousands, and then oil is discovered and that significantly reduces demand for whale oil. And then you wrote about the automobile, and how one of the major problems at the turn of the century was horse populations and horse manure in cities like London and New York. Problems were sort of solved by technology innovations that we didn't expect. Does that tell you anything about what's coming around the corner, and maybe the level of humility we should have about our ability to anticipate it?

Richard Rhodes: We're now in the middle of what I think is the largest energy transition in human history. And you know, I've written so much around this subject, and here was the chance to take a look all the way back to what was really the beginning. One of the things that I discovered when I was working on The Making of the Atomic Bomb, in fact one of the reasons I wrote that book, is that in the early 1980s we seemed to be at a crossroads where the world looked so dangerous, with all the nuclear weapons in the world.

And it seemed to me that if we went back to the beginning and took another look, there might have been alternative pathways that would have led in a safer and better direction. And I thought the same thing might be true for our energy dilemmas of today. So, that's the reason I wrote the book.

I simply say we have to use every available energy source that isn't carbon-heavy in order to survive this largest of all energy transitions. But much of the world is just in the process of developing; that is to say, people who have lived for millennia in deep poverty are slowly beginning to see the possibility, China being the most obvious example, of moving up to the kind of middle-class lives that we in the United States pretty much take for granted.

So, we have a double problem: increasing the energy usage of large numbers of people around the world while at the same time reducing the carbon content of the energy we use. That is a really big challenge, bigger than people realize. And it means that we're not going to be able to sit down and say, well, nuclear is dangerous because once in a while a nuclear power plant blows up, which is true of any energy source, and particularly unusual in the nuclear world, by the way.

We're going to have to find a way to work with nuclear as well as these other energy sources. You cannot power the world on renewables alone. The United States is rich enough that if it really wanted to, it could probably work out a way to run its entire energy economy on renewables, although I don't think it would be a very efficient system; it would be a very expensive system. But the rest of the world doesn't have that luxury. Right now China has on the drawing boards or in development some 125 nuclear power reactors. They are not even meant to deal with global warming; they are to deal with air pollution. And the Chinese are selling coal to the rest of the world, unfortunately.

So, when Germany, for example, decided to eliminate its nuclear power and go all renewables, it found itself compelled by its own energy demands to increase its use of brown coal, which is the most carbon-producing of all the various kinds of coal. The Germans have actually increased their production of carbon dioxide since they decided to eliminate their nuclear power supply.

The Italians eliminated their nuclear power, so now they buy their electricity from the French, and the French of course are about 80% nuclear, which is pretty hypocritical of the Italians. I mean, this is the kind of discussion that I think we're all going to have to have: swallow hard and look again at nuclear, look again at all the other sources of energy we can think of that are not carbon-producing, to deal with what is history's most enormous energy transition yet.


On Energy Transitions

These days the media are full of stories about people setting targets to "decarbonize" the energy sources fueling their societies. Some are claiming to achieve zero-carbon electrification, and some have failed notoriously. We should take a deep breath, step back, and rationally consider what is being discussed and proposed.

The History of Energy Transitions

Thanks to Bill Gates we have this helpful graph showing the progress of human civilization resulting from shifts in the mix of energy sources.

Before the 19th century, it is all biomass, especially wood. Some historians think that the Roman Empire collapsed partly because the cost of importing firewood from the far territories exceeded the benefits. More recently, the 1800s saw the rise of coal and the industrial revolution, a remarkable social transformation, along of course with issues of mining and urban pollution. The 20th century is marked first by the discovery and use of oil and later by natural gas. Since the chart is proportional, it shows how oil and gas took on greater importance; but in fact the total amount of energy produced and consumed in the modern world has grown exponentially, so energy from all sources, even biomass, has increased in absolute terms.
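The point about proportional charts hiding absolute growth can be made with a toy calculation. The figures below are purely illustrative, not data from the chart: they simply show that a share can collapse while the absolute quantity behind it still grows.

```python
# A shrinking share of a growing total can still grow in absolute terms.
# Illustrative figures only (not taken from the chart in the post).
total_energy = {1850: 25, 2000: 500}      # total energy use, EJ/yr (made up)
biomass_share = {1850: 0.90, 2000: 0.10}  # biomass fraction of total (made up)

for year in (1850, 2000):
    absolute = total_energy[year] * biomass_share[year]
    print(f"{year}: biomass share {biomass_share[year]:.0%}, "
          f"absolute {absolute:.1f} EJ/yr")
# The share falls from 90% to 10%, yet absolute biomass energy
# more than doubles (22.5 -> 50.0 EJ/yr in this toy example).
```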

The global view also hides the great disparity between advanced societies and the rest of the world. The advanced societies exploited carbon-based energy to become wealthy and to build large middle classes, whose human resources multiplied social capital and extended general prosperity. Those societies have also used their wealth to protect their natural environments to a greater extent.

The 21st Century Energy Concern

Largely due to reports of rising temperatures from 1980 to 2000, alarms were sounded about global warming/climate change, along with calls to stop using carbon-based energy. To understand what "decarbonization" actually means, we have two recent resources that explain clearly what is involved and why we should be skeptical and rationally critical.

First, Master Resource describes how the anti-carbon agenda is now embedded in societal structures. Mark Krebs writes Paris Lives! “Deep Decarbonization” at DOE. Excerpts in italics with my bolds.

Despite President Trump's announcement that the U.S. would withdraw from the Paris Agreement, the basis of that agreement, "deep decarbonization" through "beneficial electrification", is proceeding virtually unabated. This is occurring because it serves the purposes of the electric utility industry and its environmentalist allies, e.g., the Natural Resources Defense Council (NRDC).

According to the Paris Agreement, the fundamental strategy for climate stabilization would be by “deep decarbonization” primarily through “beneficial electrification” powered with “clean energy.” But how are these terms defined exactly?

Deep decarbonization: [4]
The primary strategy of the Paris Agreement for climate stabilization through an 80% reduction in the global use of fossil fuels to “decarbonize” the World’s energy systems by 2050.

Beneficial Electrification: [5]
Shifting consumers' direct consumption of natural gas and gasoline, along with other forms of fossil fuels, onto electricity (with the assumption that electricity generation will be dominated by "clean energy").

Clean Energy:
Strictly interpreted, it’s just renewables. And more specifically, renewable electric generation. However, many variant definitions exist. For example, DOE includes nuclear, bioenergy and fuel cells as “clean.” And so-called clean coal also appears to qualify via “carbon capture & sequestration” (CCS) as does natural gas, if it is used as a feedstock to make electricity. Energy efficiency (e.g., “nega-watts”) is also deemed “clean energy” by some.

Think about it: Transitioning to a global clean energy economy means there must be a transition from something. By the process of elimination, about the only energy sources not clean are the direct use of fossil fuels. In addition to natural gas direct use, “not clean energy” also includes gasoline, propane, etc. Regardless, “clean energy” (i.e. electrification) is being put forth as the universal cure without disclosure of side effects. In essence the ‘clean energy’ future striven for by EERE exports environmental impacts to others and at high costs. Such non-climate related impacts are ignored.

Whether it’s called regulatory capture, rent seeking or political capitalism, the result is the same: Power accrues to the powerful. In addition to receiving taxpayer funding, advocates of “deep decarbonization” have profited greatly by climate change fear mongering for donations as well as from the deep pockets of Tom Steyer and the like. And now these advocates have officially joined forces with the electric utility industry as evidenced by the recent pact between NRDC and EEI that includes the pursuit of “efficient electrification of transportation, buildings, and facilities.” [21]

In large measure, EERE’s current activities should be viewed as inappropriate subsidies for deep decarbonization via electrification in contravention of President Trump’s proclamation to withdraw from the Paris Agreement. It is also contrary to President Trump’s Executive Order 13783.

Decarbonists in Denial of History

Against this backdrop of imperatives against fossil fuels, we have Lessons from technology development for energy and sustainability by Cambridge Professor Michael J. Kelly (H/T Friends of Science). Excerpts in italics with my bolds.

Abstract: There are lessons from recent history of technology introductions which should not be forgotten when considering alternative energy technologies for carbon dioxide emission reductions.

The growth of the ecological footprint of a human population about to increase from 7B now to 9B in 2050 raises serious concerns about how to live both more efficiently and with less permanent impacts on the finite world. One present focus is the future of our climate, where the level of concern has prompted actions across the world in mitigation of the emissions of CO2. An examination of successful and failed introductions of technology over the last 200 years generates several lessons that should be kept in mind as we proceed to 80% decarbonize the world economy by 2050. I will argue that all the actions taken together until now to reduce our emissions of carbon dioxide will not achieve a serious reduction, and in some cases, they will actually make matters worse. In practice, the scale and the different specific engineering challenges of the decarbonization project are without precedent in human history. This means that any new technology introductions need to be able to meet the huge implied capabilities. An altogether more sophisticated public debate is urgently needed on appropriate actions that (i) considers the full range of threats to humanity, and (ii) weighs more carefully both the upsides and downsides of taking any action, and of not taking that action.

Key Points

Only fossil fuels and nuclear fuels have the ability to power megacities in 2050, when over half of the then 9B people will live in them.

As the more severe predictions of climate change over the last 25 years are simply not happening, it makes no sense to deploy the more costly options for renewable energy.

Abandoned infrastructure projects (such as derelict wind and solar farms in the Mojave desert) remain to have their progenitors mocked.

In this review, I want to concentrate on the measures taken to reduce the global emissions of carbon dioxide, and how the lessons from recent history of technology introductions can inform the decarbonization project. I want to review the last 20 years in particular and see what this portends for the next 40 years which will take us beyond 2050, which is the pivotal date in the public discourse. A Royal Commission into Environmental Pollution in 2000 advocated a 60% reduction of carbon dioxide emissions for the UK by 2050. 14 The date was fixed by the response to the enquiry as to when energy from nuclear fusion might supply 10% of the world’s energy needs. The answer was not before 2050, and we will need to get there without it. The revision from 60% to 80% reduction came from concern that developed countries should make allowances for developing countries using fossil fuel to escape poverty, i.e., they can take the same route as developed countries did to their relative affluence.

We have had over 20 years since the first Earth Summit in Rio de Janeiro in 1992, where 1990 emissions of carbon dioxide were agreed upon as the benchmark for reductions. Before discussing specific technologies, I want to establish the scale of the challenge in engineering, technology, and project delivery terms: this does include economics, societal attitudes, and the public discourse. I also discuss some engineering fundamentals. I will then summarize the many lessons of technology introductions, the preparation for other global challenges, and finally discuss a realistic way forward.

Scale

It is important to note the scale of the perceived problem. The entire history of modern civilization that started with the first industrial revolution has been enabled by the burning of fossil fuels. Our mobility, our health and lifestyles, our diet and its variety, our education system, particularly at the higher level, and our high culture would be quite impossible without fossil fuels, which have provided over 90% of the energy consumed on the earth since 1800. Today, geothermal, hydro- and nuclear power, together with the historic biofuels of wood and straw, account for about 15% of our energy use. 18 Even though it is 40 years since the first oil shocks kick-started the modern renewable energy developments (wind, solar, and cultivated biomass), we still get rather less than 1% of our world energy from these sources. Indeed the rate at which fossil fuels are growing is seven times that at which the low carbon energies are growing, as the ratio of fossil fuel energy used to total energy used has remained unchanged since 1990 at 85%. 19 The call to decarbonize the global economy by 80% by 2050 can now only be described as glib in my opinion, as the underlying analysis shows it is only possible if we wish to see large parts of the population die from starvation, destitution or violence in the absence of enough low-carbon energy to sustain society.

A further insight into the scale of present day energy consumption is as follows. In Europe, today, we use about 6–7 times as much energy per person per day as was used in 1800, and there are seven times as many people on earth now as compared with then. 18 The energy of 1800 was expended on heating and lighting one room in a house and producing hot water used in that same room, and on the purchase of local produce and manufactures. By examining the breakdown of today’s energy usage in the UK and Europe, 20 this energy use persists today, but with lighting and central heating of whole buildings. In addition, Europeans today use as much energy per person per day on private motoring as they used in total in 1800. They use an equal amount on mobility through public transport: trains, ships, and aeroplanes. Three times the personal consumption of 1800 is used in the manufacture and logistics of things we consume or use, such as food or manufactured goods.
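The multipliers quoted in this passage imply a rough overall scale factor. A back-of-envelope sketch (note the caveat: the per-capita figure is for Europe while the population factor is global, so this is order-of-magnitude only):

```python
# Back-of-envelope: ~6-7x energy per person (Europe, vs 1800) times
# ~7x world population suggests total energy use on the order of
# 40-50x the 1800 level. Order-of-magnitude only, since the per-capita
# figure is European while the population factor is global.
per_capita_low, per_capita_high = 6, 7  # range quoted in the text
population_factor = 7                   # population now vs 1800

low = per_capita_low * population_factor
high = per_capita_high * population_factor
print(f"Implied total-energy multiplier: {low}x to {high}x")  # 42x to 49x
```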

Over the next 20 years, the World Bank estimates that the middle class will rise from 3B to 5B, on the basis of which BP estimates a further increase in global energy demand of 40% still to be met in the main by fossil fuels. The graph in Fig. 1(a) is on the wrong scale to show that the total installed renewable energy capacity as of today is equal to the combined capacity of the nuclear power plants shut down in Japan and scheduled to close in Germany, making the challenge of carbon free energy impossible to meet.

Energy return on investment (EROI)

The debate over decarbonization has focussed on technical feasibility and economics. There is one emerging measure that comes closely back to the engineering and the thermodynamics of energy production. The energy return on (energy) investment is a measure of the useful energy produced by a particular power plant divided by the energy needed to build, operate, maintain, and decommission the plant. This is a concept that owes its origin to animal ecology: a cheetah must get more energy from consuming his prey than expended on catching it, otherwise it will die. If the animal is to breed and nurture the next generation then the ratio of energy obtained from energy expended has to be higher, depending on the details of energy expenditure on these other activities. Weißbach et al. 23 have analysed the EROI for a number of forms of energy production and their principal conclusion is that nuclear, hydro-, and gas- and coal-fired power stations have an EROI that is much greater than wind, solar photovoltaic (PV), concentrated solar power in a desert or cultivated biomass: see Fig. 2. In human terms, with an EROI of 1, we can mine fuel and look at it—we have no energy left over. To get a society that can feed itself and provide a basic educational system we need an EROI of our base-load fuel to be in excess of 5, and for a society with international travel and high culture we need EROI greater than 10. The new renewable energies do not reach this last level when the extra energy costs of overcoming intermittency are added in. In energy terms the current generation of renewable energy technologies alone will not enable a civilized modern society to continue!
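The EROI arithmetic and the societal thresholds quoted above (greater than 5 for a basic society, greater than 10 for one with travel and high culture) can be sketched as follows. The plant figures in the example are illustrative inventions, not Weißbach et al.'s numbers:

```python
# EROI = useful energy a plant delivers over its life, divided by the
# energy spent building, operating, maintaining, and decommissioning it.
# Thresholds (5, 10) are the rule of thumb quoted in the passage.

def eroi(energy_out, energy_in):
    """Energy return on (energy) investment."""
    return energy_out / energy_in

def supports(eroi_value):
    """Crude societal-threshold check per the passage's rule of thumb."""
    if eroi_value > 10:
        return "complex society (travel, high culture)"
    if eroi_value > 5:
        return "basic society (food, schooling)"
    return "insufficient surplus energy"

# Illustrative lifetime figures (arbitrary units, same for both plants):
print(supports(eroi(750, 25)))   # EROI 30 -> complex society
print(supports(eroi(160, 40)))   # EROI 4  -> insufficient surplus
```

The point of the passage is that adding the energy cost of buffering intermittency raises `energy_in` for the new renewables enough to push their EROI below the upper threshold.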

Successful new technologies improve the lot of mankind

I have already referred to the use of Watt’s steam energy as a source of energy to improve harvesting, greatly aiding agricultural productivity. Notice too that the windmills of Europe stopped turning: the new source of energy was compact, moveable, reliable, available when needed, and of relatively low maintenance. This differential has widened ever since, and the recent windmills do not greatly close the gap in practical utility or cost. Later in the 19th century, electricity from steam turbines became available to lighten the darkness, power an increasing range of machinery, and increase the productivity of mankind to the extent that can be seen today when one contrasts an industrial city with a remote off-grid rural community. It is this energy which has underpinned the ability to improve sanitation, transport goods, and allow modern communications and advanced healthcare. During the 20th century, jet engines greatly reduced the time taken to get between two distant places, with semiconductor technologies eliminating that time with virtual presence anywhere anytime. The genetic engineering technologies have greatly speeded up the processes of plant breeding and the recent green revolution means that the larger population of the world now is better fed than ever before. The remaining areas of starvation are universally associated with war, and/or bad governance interfering with supply chains.

R&D in new technologies is a good use of public money

Really new technologies often span several existing sectors of private industry, which would benefit or lose out if the new technology were introduced: coachmen in the age of buses and trains, pigeon carriers in the age of the telegraph. It is difficult to imagine electronics rising to its current pervasive state had governments around the world not supported the relevant R&D in its early stages. Today the global R&D budget exceeds $1T, and the public purse contributes much of that. 33 In many advanced countries, there is significant public support of private R&D in the perceived total public interest, an interest that is not the particular focus of any one company in the private sector. The analysis of the origin of Apple's technologies is an exemplary case of the private capture of public investment. 34

In the last 40 years, there has continued to be public-good R&D undertaken in many countries on new energy technologies, which has given rise to the first generation of renewables. The support extends well down the development channel as well as into the background research. This is because the private risk of initial small-scale deployment to test the effectiveness of a new technology is often too high for a single company or consortium to bear. The USA has the most effective ecosystem of innovation in the world, eclipsed only for a brief period in the 1980s by Japan.

Premature roll-out of immature/uneconomic technologies is a recipe for failure

The virtuous role of government funding in R&D is to be contrasted with the litany of recent failures of subsidies in support of the premature roll-out of technologies that are uneconomic and/or immature.

At its prime, the Carrizo Plain plant (S. California) was by far the largest photovoltaic array in the world, with 100,000 1′ × 4′ panels generating 5.2 megawatts at its peak. The plant was originally constructed by ARCO in 1983 and was dismantled in the late 1990s. The used panels are still being resold throughout the world.

In the late 1980s, large-scale installations of wind and solar farms were made in the Mojave desert. One can see the square kilometers of green industrial dereliction by googling the phrases 'abandoned wind farm' and 'abandoned solar farm'. The useful energy generated within these farms was insufficient to pay the interest on capital and to maintain production. The companies have gone bankrupt, and there is no one to decommission the infrastructure and return the sites to their pristine condition. I note that the remains are there to be mocked as an infrastructure project gone wrong, and they will remain for decades, a modern version of the hubris of Ozymandias or the builders of the Tower of Babel. It is important to note that some (but not all 36 ) second- and third-generation wind and solar farms in the Mojave desert have fared and are faring better, 37 but the lesson here remains that premature roll-out of unready technology is unwise.

The primary problem is the use of public money, i.e., subsidies, to encourage the roll-out. Such subsidies have a plethora of unintended consequences in the energy infrastructure sector. During the economic crisis of 2008/9, many of the subsidies were reduced or withdrawn in the USA, and many small companies went bankrupt. This has continued with subsidy reductions in the UK, Germany, Spain and elsewhere, with further bankruptcies in the alternative energy sector. 38 Indeed, there is an index of the stock value of alternative energy companies, RENIXX, which lost 80% of its value between 2008 and 2013, although it has recovered a little of that fall more recently. It is certainly not the place for pension fund investments: if the market were mature and stable, a 40-year programme to renew the global energy infrastructure would be the natural place for pension funds. 39 The underlying reason for these failures is that the technologies are uneconomic over their lifecycles and immature in terms of the energy return on their investment (as in section "Energy return on investment (EROI)" above). In China, public subsidies continue, with solar panels being sold at about a 30% loss on the cost of production. 40 That is a political strategy at work rather than an industrial strategy. In democracies, there is unlikely to be a multiparty, multigovernment consensus lasting for the multidecadal timescales implied by major infrastructure change.

There is an unintended and unwanted social consequence of the roll-out of these new technologies. There is ample evidence in the UK of increasing fuel poverty (i.e., households spending over 10% of disposable income on keeping warm in winter) in the regions of wind farm deployment, where higher electricity bills are needed to cover the rents paid to (usually already wealthy) landowners. This is a direct reversal of the process whereby cheap energy over the last century has lifted a significant fraction of the world's poor from their poverty. 41 Renewable energy supplements are viewed as socially divisive.
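The 10% definition cited above reduces to a simple ratio test, sketched below. The household figures are hypothetical illustrations, not survey data:

```python
# Sketch of the UK fuel-poverty threshold cited in the text: a household
# spending over 10% of disposable income on keeping warm in winter.
# Figures below are hypothetical, chosen only to illustrate the ratio.

def is_fuel_poor(winter_heating_spend, disposable_income, threshold=0.10):
    """True if heating spend exceeds the given fraction of disposable income."""
    return winter_heating_spend / disposable_income > threshold

# Hypothetical household on £20,000 disposable income:
print(is_fuel_poor(1800.0, 20000.0))   # 9.0% of income  -> False
print(is_fuel_poor(2500.0, 20000.0))   # 12.5% of income -> True
```

Note the mechanism the definition captures: a rise in electricity bills pushes households across the threshold even when their income and heating habits are unchanged.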

Technology breakthroughs are not pre-programmable

When public commentators such as Thomas L Friedman enter the debate about energy technologies, they urge more research to produce a breakthrough energy technology: in his case, a 'plentiful supply of clean green cheap electrons'. 43 It is salutary to realize that all but two of the energy technologies used today have counterparts in biblical times, the only newcomers being nuclear energy and solar photovoltaics. The delivery of coal, gas, wind, water, and solar energy may be quite different today from then, but the underlying principles of operation have not changed. Since nuclear fusion was first demonstrated, there has been a 60-year effort to tame it as a source of electrical energy, so far without success. One can ask the experts whether they might have made more progress with more money, but the challenges have remained profound. Even if there were a breakthrough tomorrow in the basic processes, it would still take of order 40 years (rather than 20, in my opinion) to complete the further engineering and technology work and deploy fusion reactors able to provide (say) 10% of the world's electricity. We must get to 2050 without it.

Finance is limited, so actions at scale must be prioritized

The sum involved in renewing the energy infrastructure in the UK is about £200B over the next decade. 46 A large element of this cost is to make good the lack of infrastructure investment over the last 20 years, since a privatized energy market was introduced. In addition, the large-scale modification of the grid to cope with multiple renewable energy inputs has to be included. There remains a dispute in the public domain as to where these costs lie. The grid as we know it in major industrial countries has evolved over a period of 100 years on the basis of relatively few large sources of energy connected to a grid which circulates power to substations, from which it is transmitted to individual end-users in a broadcast mode. With multiple small and independent sources of energy from wind and solar installations, the grid topology has to change to cope with this very different quality of energy. The conventional suppliers of energy say that they should not have to cover these extra costs, which should instead be book-kept with the renewable energies in the overall balance sheet of costs. A similar book-keeping problem arises with the costs of back-up to intermittent renewable energies. The combined cycle gas turbine generators that have delivered base-load electricity to the grid in Germany are now being asked to act in back-up mode, with frequent acceleration and deceleration of the turbines, a duty for which they were not designed and which shortens their in-service lifetime. 47 The cost analyses of future energy are bedevilled by the assignment of such additional consequential costs. In practice, as consumers, we buy the energy provided by electricity blind to the particular way it is produced.

The scale of the costs of these energy bills is such that one cannot make mistakes in infrastructure investment decisions. A wrong investment is a missed opportunity on a large scale.

Finally, it is as well to remember that there are only ever two sources of payment, the consumer and the taxpayer, and the only issue at stake between them is the directness with which costs are recovered. 

The way forward

It is surely time to review the current direction of the decarbonization project, which can be taken to have started in about 1990, the reference point from which carbon dioxide emission reductions are measured. No serious inroads have been made into the lion's share of energy that is fossil fuel based. Some moves represent total madness. The closure of all but one of the aluminium smelters that used gas-fired electricity in the UK (because of rising electricity costs from the green tariffs that are over and above any global background fossil fuel energy costs) reduces our nation's carbon dioxide emissions. 62 However, the aluminium is now imported from China, where it is made with more primitive coal-based sources of energy, making the global problem of emissions worse! While the UK prides itself on reducing indigenous carbon dioxide emissions by 20% since 1990, the attribution of carbon emissions by end use shows a 20% increase over the same period. 63

It is also clear that we must de-risk all energy infrastructure projects over the next two decades. While the level of uncertainty remains high, the 'insurance policy' justification of urgent large-scale intervention is untenable: we do not pay premiums that would bankrupt us. Certain things we do not insure against, such as a potential future mega-tsunami, 64 or a supervolcano, 65 or indeed a meteor strike, even though there have been over 20 of these since 2000 with the local power of the Hiroshima bomb! 66 Using a significant fraction of global GDP to possibly capture the benefits of a possibly less troublesome future climate leaves more urgent actions undone.

Two important points remain. The first is that there is no alternative to business as usual carrying on, with one caveat expressed in the following paragraph. Since energy use has a cost, it is normal business practice to minimize energy use, by increasing energy efficiency (see especially the recent improvement in automobile performance), 67 using less resource material and more effective recycling. These drivers have become more intense in recent years, but they were always there for a business trying to remain competitive.

The second is that, over the next two decades, the single place where the greatest impact on carbon dioxide emissions can be achieved is in the area of personal behaviour. Its potential dwarfs that of new technology interventions. Within the EU over the last 40 years, there has been a notable change in public attitudes and behaviour in such diverse arenas as drinking and driving, smoking in confined public spaces, and driving without a seatbelt. If society were to regard the profligate consumption of any materials and resources, including all forms of fuel and electricity, as deeply antisocial, it has been estimated that we could live at something like our present standard of living on half the energy we consume today in the developed world. 68 This would mean fewer miles travelled, fewer material possessions, shorter supply chains, and less use of the internet. While there is no public appetite to follow this path, the short-term technology-fix path is no panacea.

Conclusions

Over the last 200 years, fossil fuels have provided the route out of grinding poverty for many people in the world (but still fewer than half of all people), and Fig. 1 shows that this trend is certain to continue for at least the next 20 years based on the technologies of scale that are available today. A rapid decarbonization is simply impossible over the next 20 years unless the trend of a growing number succeeding in improving their lot is stalled by rich and middle-class people downgrading their own standard of living. The current backlash against subsidies for renewable energy systems in the UK, EU and USA is a sign that all is not well with current renewable energy systems in meeting the aspirations of humanity.

Finally, humanity is owed a serious investigation of how we have gone so far with the decarbonization project without a serious challenge in terms of engineering reality. Have the engineers been supine and lacking in the courage to challenge the orthodoxy? Or have their warnings been too gentle, dismissed, or simply not heard? Scientists and politicians can take too much comfort from undoubted engineering successes over the last 200 years. When the sums at stake are on the scale of 1–10% of the world's GDP, this is a serious business.

See also:  Climateers Tilting at Windmills Updated

Figure 12: Figure 9 with Y-scale expanded to 100% and thermal generation included, illustrating the magnitude of the problem the G20 countries still face in decarbonizing their energy sectors.

Sloppy Science + Bad Reporting = Fake Scare

 

Abusing science to incite fear is not confined to global warming/climate change. Medical science has also been debased by taking up the appeal to public alarm. The current example is the exploitation of ovarian cancer, as explained by Warren Kindzierski writing in the Financial Post: How weaselly science and bad reporting consistently find cancer links that don't exist. (Weaselly: stretching facts with the use of such words as 'this could,' 'can,' 'may,' 'might,' 'probably,' 'likely' cause cancer.)

Last month, the Quebec court authorized a class-action suit against two brands of baby powder that alleges that regular use of talc powder by women in their genital area is linked to a higher risk of ovarian cancer. Part of the allegations relate to claims that an ovarian cancer risk from powdered talc use is demonstrated by nearly four decades of scientific studies. Cosmetic talc has certainly been the subject of much scientific debate, study and, increasingly, legal challenge.

However, the cosmetic talc-ovarian cancer link is commonly misunderstood. Published biomedical studies cover both sides, suggesting a talc-ovarian cancer link and showing no link. Even today in prominent journals, letters to the editor — penned by scientists — rage back and forth, defending their studies or attacking the other side’s studies.

Now this is civilized, real science.

This bouncing back and forth of positive versus negative effects between talc and ovarian cancer is referred to as "vibration of effects" by John Ioannidis, a professor of medicine and of health research and policy at Stanford University. Studies vary depending on how they are done. Why is this? Well, getting scientists to agree on important things like methods, what data to use and how to analyze and interpret effects from subtle human exposures is next to impossible. It would be no problem if one were studying cancer risks in populations receiving large exposures over long durations; but such situations are non-existent.

The truth is that the ability of any biomedical method, epidemiology included, to discriminate cancer risks in people from small exposures to a physical or chemical agent does not exist.

Most cancers are caused by a number of factors. As a result, establishing cancer causation is complex, unless a particular risk factor is overwhelming. Epidemiology studies cannot and do not realistically replicate this complexity, at least not very well. That is why the U.S. National Institutes of Health and the National Cancer Institute list a number of key risk factors for ovarian cancer, and talc is not one of them.

The institute states that it is not clear whether talc affects ovarian cancer risk. An expert U.S. cosmetic-ingredient review panel assessed the safety of cosmetic talc in 2015. It thoroughly analyzed numerous studies investigating whether or not a relationship exists between cosmetic use of talc in the perineal area and ovarian cancer. The panel determined that these studies do not support a causal link. It also agreed that there is no known physiological mechanism by which talc could plausibly migrate from the perineum to the ovaries. The news coverage of the lawsuit has been silent on that evidence.

Part of the public’s misunderstanding about talc comes from scientists offering opinions about cancer from small exposures. Too many scientists use weasel words to stretch facts: “This could,” “can,” “may,” “might,” “probably,” “likely” cause cancer. Flimsy so-called evidence from their studies that suffer from vibration of effects and their speculations are voraciously inhaled by naïve journalists. Stretched facts miraculously get reported as facts to the public — or worse, misused for litigation purposes.

The woman’s bathroom is a chemical exposure chamber with literally dozens of cosmetic products used at various times. Both skin contact and inhalation regularly occur with grooming products. However, repeated uses of small amounts of cosmetic talc or any other cosmetic product do not amount to overwhelming exposures despite the claims of some scientists and media. Overwhelming exposures — the ones that cause effects — are those that occur with laboratory rats and mice. Underwhelming exposures are what occur to people in the real world.

It is highly speculative that repeated use of small amounts of cosmetic talc is a definitive cause of ovarian cancer. It is not a definitive cause; it is only suggestive. Prominent organizations such as the U.S. National Cancer Institute and expert panels should make clear statements about such cancer risks, but they do not. Selective methods in epidemiology studies, speculation by scientists and inaccurate reporting by news media are ingredients used to transform weak suggestive evidence from underwhelming cosmetic talc exposure into something that is mistakenly claimed to be harmful for the public.

And that is why we end up with class action suits against cosmetic companies.

Warren Kindzierski is an associate professor in The School of Public Health at the University of Alberta.