Justice Alito Finds Chinks in Mann’s Legal Armor

US Supreme Court Justice Alito dissented from the denial of certiorari that left in place a lower court ruling allowing Mann’s defamation lawsuit to proceed (now in its seventh year).  Background on the case history appears later in this post.  This post provides Alito’s opinion and his reasons why the Supreme Court should take up the case, if not now then after a final outcome is reached below.  The text comes from the US Supreme Court Orders of November 25, 2019.  Excerpts in italics with my bolds.

SUPREME COURT OF THE UNITED STATES
NATIONAL REVIEW, INC. v. MICHAEL E. MANN, No. 18–1451
COMPETITIVE ENTERPRISE INSTITUTE, ET AL. v. MICHAEL E. MANN, No. 18–1477
ON PETITIONS FOR WRITS OF CERTIORARI TO THE DISTRICT OF COLUMBIA COURT OF APPEALS
Decided November 25, 2019

The motions of Southeastern Legal Foundation for leave to file briefs as amicus curiae are granted. The petitions for writs of certiorari are denied.

JUSTICE ALITO, dissenting from the denial of certiorari. The petition in this case presents questions that go to the very heart of the constitutional guarantee of freedom of speech and freedom of the press: the protection afforded to journalists and others who use harsh language in criticizing opposing advocacy on one of the most important public issues of the day. If the Court is serious about protecting freedom of expression, we should grant review.

I.  Penn State professor Michael Mann is internationally known for his academic work and advocacy on the contentious subject of climate change. As part of this work, Mann and two colleagues produced what has been dubbed the “hockey stick” graph, which depicts a slight dip in temperatures between the years 1050 and 1900, followed by a sharp rise in temperature over the last century. Because thermometer readings for most of this period are not available, Mann attempted to ascertain temperatures for the earlier years based on other data such as growth rings of ancient trees and corals, ice cores from glaciers, and cave sediment cores. The hockey stick graph has been prominently cited as proof that human activity has led to global warming. Particularly after e-mails from the University of East Anglia’s Climate Research Unit were made public, the quality of Mann’s work was called into question in some quarters.

Columnists Rand Simberg and Mark Steyn criticized Mann, the hockey stick graph, and an investigation conducted by Penn State into allegations of wrongdoing by Mann. Simberg’s and Steyn’s comments, which appeared in blogs hosted by the Competitive Enterprise Institute and National Review Online, employed pungent language, accusing Mann of, among other things, “misconduct,” “wrongdoing,” and the “manipulation” and “tortur[e]” of data. App. to Pet. for Cert. in No. 18–1451, pp. 94a, 98a (App.).

Mann responded by filing a defamation suit in the District of Columbia’s Superior Court. Petitioners moved for dismissal, relying in part on the District’s anti-SLAPP statute, D. C. Code §16–5502(b) (2012), which requires dismissal of a defamation claim if it is based on speech made “in furtherance of the right of advocacy on issues of public interest” and the plaintiff cannot show that the claim is likely to succeed on the merits. The Superior Court denied the motion, and the D. C. Court of Appeals affirmed. 150 A. 3d 1213, 1247, 1249 (2016). The petition now before us presents two questions:

(1) whether a court or jury must determine if a factual connotation is “provably false” and

(2) whether the First Amendment permits defamation liability for expressing a subjective opinion about a matter of scientific or political controversy. Both questions merit our review.

II.  The first question is important and has divided the lower courts. See 1 R. Smolla, Law of Defamation §§6.61, 6.62, 6.63 (2d ed. 2019); 1 R. Sack, Defamation §4:3.7 (5th ed. 2019). Federal courts have held that “[w]hether a communication is actionable because it contained a provably false statement of fact is a question of law.” Chambers v. Travelers Cos., 668 F. 3d 559, 564 (CA8 2012); see also, e.g., Madison v. Frazier, 539 F. 3d 646, 654 (CA7 2008); Gray v. St. Martin’s Press, Inc., 221 F. 3d 243, 248 (CA1 2000); Moldea v. New York Times Co., 15 F. 3d 1137, 1142 (CADC 1994). Some state courts, on the other hand, have held that “it is for the jury to determine whether an ordinary reader would have understood [expression] as a factual assertion.” Good Govt. Group of Seal Beach, Inc. v. Superior Ct. of Los Angeles Cty., 22 Cal. 3d 672, 682, 586 P. 2d 572, 576 (1978); see also, e.g., Aldoupolis v. Globe Newspaper Co., 398 Mass. 731, 734, 500 N. E. 2d 794, 797 (1986); Caron v. Bangor Publishing Co., 470 A. 2d 782, 784 (Me. 1984). In this case, it appears that the D. C. Court of Appeals has joined the latter camp, leaving it for a jury to decide whether it can be proved as a matter of fact that Mann improperly treated the data in question. See App. 29a, 52a–53a, 65a, n. 46.

Respondent does not deny the existence of a conflict in the decisions of the lower courts. See Brief in Opposition at 30. Nor does he dispute the importance of the question. Instead, he argues that the D. C. Court of Appeals followed the federal rule,* but the D. C. Court of Appeals’ opinion repeatedly stated otherwise. See App. 29a (asking what “a jury properly instructed on the applicable legal and constitutional standards could reasonably find”); id., at 52a–53a (repeatedly describing what a jury “could find”); id., at 65a, n. 46 (stating that in a case like this one, involving what it characterized as a claim of “‘ordinary libel,’” “the standard is ‘whether a reasonable jury could find that the challenged statements were false’” (emphasis in original)). This last statement is especially revealing because it appears in a footnote that was revised in response to petitioners’ petition for rehearing, see id., at 1a, n. *, which disputed the correctness of the standard that asks what a jury could find, see id., at 65a, n. 46. We therefore have before us a decision on an indisputably important question of constitutional law on which there is an acknowledged split in the decisions of the lower courts. A question of this nature deserves a place on our docket.

*Respondent’s lead argument in opposition to certiorari is that we lack jurisdiction under 28 U. S. C. §1257, see Brief in Opposition 27–30, but petitioners have a strong argument that we have jurisdiction under Cox Broadcasting Corp. v. Cohn, 420 U. S. 469 (1975). If the Court has doubts on this score, the question of jurisdiction can be considered together with the merits.

This question—whether the courts or juries should decide whether an allegedly defamatory statement can be shown to be untrue—is delicate and sensitive and has serious implications for the right to freedom of expression. And two factors make the question especially important in the present case.

First, the question that the jury will apparently be asked to decide—whether petitioners’ assertions about Mann’s use of scientific data can be shown to be factually false—is highly technical. Whether an academic’s use and presentation of data falls within the range deemed reasonable by those in the field is not an easy matter for lay jurors to assess.

Second, the controversial nature of the whole subject of climate change exacerbates the risk that the jurors’ determination will be colored by their preconceptions on the matter. When allegedly defamatory speech concerns a political or social issue that arouses intense feelings, selecting an impartial jury presents special difficulties. And when, as is often the case, allegedly defamatory speech is disseminated nationally, a plaintiff may be able to bring suit in whichever jurisdiction seems likely to have the highest percentage of jurors who are sympathetic to the plaintiff ’s point of view.  See Keeton v. Hustler Magazine, Inc., 465 U. S. 770, 781 (1984) (regular circulation of magazines in forum State sufficient to support jurisdiction in defamation action). For these reasons, the first question presented in the petition calls out for review.

III.   The second question may be even more important. The constitutional guarantee of freedom of expression serves many purposes, but its most important role is protection of robust and uninhibited debate on important political and social issues. See Snyder v. Phelps, 562 U. S. 443, 451–452 (2011); New York Times Co. v. Sullivan, 376 U. S. 254, 270 (1964). If citizens cannot speak freely and without fear about the most important issues of the day, real self-government is not possible. See Garrison v. Louisiana, 379 U. S. 64, 74–75 (1964) (“[S]peech concerning public affairs is more than self-expression; it is the essence of self-government”). To ensure that our democracy is preserved and is permitted to flourish, this Court must closely scrutinize any restrictions on the statements that can be made on important public policy issues. Otherwise, such restrictions can easily be used to silence the expression of unpopular views.

At issue in this case is the line between, on the one hand, a pungently phrased expression of opinion regarding one of the most hotly debated issues of the day and, on the other, a statement that is worded as an expression of opinion but actually asserts a fact that can be proven in court to be false. Milkovich v. Lorain Journal Co., 497 U. S. 1 (1990). Under Milkovich, statements in the first category are protected by the First Amendment, but those in the latter are not. Id., at 19–20, 22. And Milkovich provided examples of statements that fall into each category. As explained by the Court, a defamation claim could be asserted based on the statement: “In my opinion John Jones is a liar.” Id., at 18. This statement, the Court noted, implied knowledge that Jones had made particular factual statements that could be shown to be false. Ibid. As for a statement that could not provide the basis for a valid defamation claim, the Court gave this example: “In my opinion Mayor Jones shows his abysmal ignorance by accepting the teachings of Marx and Lenin.” Id., at 20.

When an allegedly defamatory statement is couched as an expression of opinion on the quality of a work of scholarship relating to an issue of public concern, on which side of the Milkovich line does it fall? This is a very important question that would greatly benefit from clarification by this Court. Although Milkovich asserted that its hypothetical statement about the teachings of Marx and Lenin would not be actionable, it did not explain precisely why this was so. Was it the lack of specificity or the nature of statements about economic theories or all scholarly theories or perhaps something else?

In recent years, the Court has made a point of vigilantly enforcing the Free Speech Clause even when the speech at issue made no great contribution to public debate. For example, last Term, in Iancu v. Brunetti, 588 U. S. ___ (2019), we upheld the right of a manufacturer of jeans to register the trademark “F-U-C-T.” Two years before, in Matal v. Tam, 582 U. S. ___ (2017), we held that a rock group called “The Slants” had the right to register its name.

In earlier cases, the Court went even further. In United States v. Alvarez, 567 U. S. 709 (2012), the Court held that the First Amendment protected a man’s false claim that he had won the Congressional Medal of Honor. In Snyder, the successful party had viciously denigrated a deceased soldier outside a church during his funeral. 562 U. S., at 448–449. In United States v. Stevens, 559 U. S. 460, 466 (2010), the First Amendment claimant had sold videos of dog fights.

If the speech in all these cases had been held to be unprotected, our Nation’s system of self-government would not have been seriously threatened. But as I noted in Brunetti, 588 U. S., at ___ (slip op., at 1) (concurring opinion), the protection of even speech as trivial as a naughty trademark for jeans can serve an important purpose: It can demonstrate that this Court is deadly serious about protecting freedom of speech. Our decisions protecting the speech at issue in that case and the others just noted can serve as a promise that we will be vigilant when the freedom of speech and the press are most seriously implicated, that is, in cases involving disfavored speech on important political or social issues.

This is just such a case. Climate change has staked a place at the very center of this Nation’s public discourse. Politicians, journalists, academics, and ordinary Americans discuss and debate various aspects of climate change daily—its causes, extent, urgency, consequences, and the appropriate policies for addressing it. The core purpose of the constitutional protection of freedom of expression is to ensure that all opinions on such issues have a chance to be heard and considered.

I do not suggest that speech that touches on an important and controversial issue is always immune from challenge under state defamation law, and I express no opinion on whether the speech at issue in this case is or is not entitled to First Amendment protection. But the standard to be applied in a case like this is immensely important. Political debate frequently involves claims and counterclaims about the validity of academic studies, and today it is something of an understatement to say that our public discourse is often “uninhibited, robust, and wide-open.” New York Times Co., 376 U. S., at 270.

I recognize that the decision now before us is interlocutory and that the case may be reviewed later if the ultimate outcome below is adverse to petitioners. But requiring a free speech claimant to undergo a trial after a ruling that may be constitutionally flawed is no small burden. See Cox Broadcasting Corp. v. Cohn, 420 U. S. 469, 485 (1975) (observing that “there should be no trial at all” if the statute at issue offended the First Amendment). A journalist who prevails after trial in a defamation case will still have been required to shoulder all the burdens of difficult litigation and may be faced with hefty attorney’s fees. Those prospects may deter the uninhibited expression of views that would contribute to healthy public debate. For these reasons, I would grant the petition in this case, and I respectfully dissent from the denial of certiorari.

From Previous Post: Courts Shielding Michael Mann from Climate Exposure

An editorial from National Review summarizes how the courts function as Michael Mann’s protective shield: NR Won’t Be Cowed by a Litigious Michael Mann, December 21, 2018.  Excerpts below with my bolds.

At this rate, Jarndyce v. Jarndyce will be replaced in the Western canon as the go-to example of the court case that never ends by National Review, Inc. v. Michael E. Mann, which is now well into its seventh year as a live proposition and, alas, showing no end in sight.

For those who have forgotten, this is the 2012 case in which Mann sued National Review for libel over a 270-word blog post that criticized his infamous “hockey stick” graph portraying global warming, in response to which National Review refused to acquiesce to what was, and remains, nothing less than an attempt to use the law to bully the press into submission. That this case is both frivolous in nature and clear-cut in National Review’s favor seems to be obvious to everyone except for Michael Mann and the D.C. Court of Appeals. Indeed, in the years since Mann made his play, National Review has been joined by a veritable Who’s Who of American media organizations — including, but not limited to, the ACLU, the National Press Club, Comcast, the Cato Institute, the Washington Post, Time Inc., Reporters Committee for Freedom of the Press, and the Electronic Frontier Foundation, all of which have filed amicus briefs on NR’s side. Tellingly, National Review has also been supported by the City of Washington, D.C., in which jurisdiction the case was brought. And yet, inexplicably, the D.C. Court of Appeals continues to drag its feet.

This is extraordinary, especially given that at stake here is the integrity of the First Amendment. It is extraordinary foremost because National Review’s case is both straightforward and strong: that it is not, and it has never been, the role of the courts to settle literary or scientific disputes. But it is also extraordinary because National Review’s case is being heard under rules laid out by Washington, D.C.’s robust “anti-SLAPP” law, the explicit purpose of which is to make it more difficult to harass people and organizations with frivolous libel threats and thereby to protect a sturdy culture of free speech. How, we ask, can this be reconciled with a case such as ours, in which, among other inexplicable delays, the court has taken two years to add a single footnote to the record (and modify another)? That a slam-dunk case that is being examined under an expedited process should have yielded so many years of expensive radio static is a genuine national disgrace, and should be widely regarded as such.

National Review neither encourages nor enjoys protracted, expensive, tedious litigation. Indeed, it is our resolute view that questions such as these must be resolved outside of the courtroom. But we will be cowed neither by pressure nor by the passage of time, and we are proud of our role as a champion of the First Amendment. To those who would abridge, undermine, or attempt to circumvent that bulwark of free expression, our response is, as it ever was: Get Lost.

See also:  Rise and Fall of the Modern Warming Spike

US Federal Agency Lawmaking Out of Bounds

A previous post reprinted below discussed how US Supreme Court Justices appear ready to challenge regulatory lawmaking that goes beyond legislative boundaries.  This is of course at the heart of climate change overreach, as well as the intrusion of Political Correctness into many other areas such as environment, health, education, immigration, and so on.  Most recently Justice Kavanaugh highlighted the constitutional concern in his opinion included in the US Supreme Court Orders of November 25, 2019.  Excerpts in italics with my bolds.

SUPREME COURT OF THE UNITED STATES
RONALD W. PAUL v. UNITED STATES, No. 17–8830
ON PETITION FOR WRIT OF CERTIORARI TO THE UNITED STATES COURT OF APPEALS FOR THE SIXTH CIRCUIT
Decided November 25, 2019

The petition for a writ of certiorari is denied.

Statement of JUSTICE KAVANAUGH respecting the denial of certiorari. I agree with the denial of certiorari because this case ultimately raises the same statutory interpretation issue that the Court resolved last Term in Gundy v. United States, 588 U. S. ___ (2019). I write separately because JUSTICE GORSUCH’s scholarly analysis of the Constitution’s nondelegation doctrine in his Gundy dissent may warrant further consideration in future cases. JUSTICE GORSUCH’s opinion built on views expressed by then-Justice Rehnquist some 40 years ago in Industrial Union Dept., AFL–CIO v. American Petroleum Institute, 448 U. S. 607, 685–686 (1980) (Rehnquist, J., concurring in judgment). In that case, Justice Rehnquist opined that major national policy decisions must be made by Congress and the President in the legislative process, not delegated by Congress to the Executive Branch.

In the wake of Justice Rehnquist’s opinion, the Court has not adopted a nondelegation principle for major questions. But the Court has applied a closely related statutory interpretation doctrine: In order for an executive or independent agency to exercise regulatory authority over a major policy question of great economic and political importance, Congress must either: (i) expressly and specifically decide the major policy question itself and delegate to the agency the authority to regulate and enforce; or (ii) expressly and specifically delegate to the agency the authority both to decide the major policy question and to regulate and enforce. See, e.g., Utility Air Regulatory Group v. EPA, 573 U. S. 302 (2014); FDA v. Brown & Williamson Tobacco Corp., 529 U. S. 120 (2000); MCI Telecommunications Corp. v. American Telephone & Telegraph Co., 512 U. S. 218 (1994); Breyer, Judicial Review of Questions of Law and Policy, 38 Admin. L. Rev. 363, 370 (1986).

The opinions of Justice Rehnquist and JUSTICE GORSUCH would not allow that second category—congressional delegations to agencies of authority to decide major policy questions—even if Congress expressly and specifically delegates that authority. Under their approach, Congress could delegate to agencies the authority to decide less-major or fill-up-the-details decisions. Like Justice Rehnquist’s opinion 40 years ago, JUSTICE GORSUCH’s thoughtful Gundy opinion raised important points that may warrant further consideration in future cases.

Previous Post:  Supremes May Rein In Agency Lawmaking

This post consists of a legal discussion regarding undesirable outcomes from some Supreme Court rulings that gave excessive deference to Executive Branch agency regulators. Relying on the so-called “Chevron deference” can result in regulations going beyond what Congress intended by its laws.

Professor Mike Rappaport writes at Law and Liberty Replacing Chevron with a Sounder Interpretive Regime. Excerpts in italics with my bolds.

The Issue

The need for a new interpretive arrangement to replace Chevron is demonstrated by a climate change example, quoted immediately below and appearing again in context near the end.

Importantly, this new arrangement would significantly limit agencies from using their legal discretion to modify agency statutes to combat new problems never envisioned by the enacting Congress. For example, when the Clean Air Act was passed, no one had in mind it would be addressed to anything like climate change. Yet, the EPA has used Chevron deference to change the meaning of the statute so that it can regulate greenhouse gases without Congress having to decide whether and in what way that makes sense.

Such discretion gives the EPA enormous power to pursue its own agenda without having to secure the approval of the legislative or judicial branches.

Background and Proposals

One of the most important questions within administrative law is whether the Supreme Court will eliminate Chevron deference. But if Chevron deference is eliminated, as I believe it should be, a key question is what should replace it. In my view, there is a ready alternative which makes sense as a matter of law and policy. Courts should not give agencies Chevron deference, but should provide additional weight to agency interpretations that are adopted close to the enactment of a statute or that have been followed for a significant period of time.

Chevron deference is the doctrine that provides deference to an administrative agency when it interprets a statute that it administers. In short, the agency’s interpretation will only be reversed if a court deems the interpretation unreasonable rather than simply wrong. Such deference means that the agency can select among the (often numerous) “reasonable” interpretations of a statute to pursue its agenda. Moreover, the agency is permitted to change from one reasonable interpretation to another over time based on its policy views. In conjunction with the other authorities given to agencies, such as the delegation of legislative power, Chevron deference constitutes a key part of agency power.

There is, however, a significant chance that the Supreme Court may eliminate Chevron deference. Two of the leaders of this movement are Justices Thomas and Gorsuch. But Chief Justice Roberts as well as Justices Alito and Kavanaugh have also indicated that they might be amenable to overturning Chevron. For example, in the Kisor case from this past term, which cut back on but declined to overturn the related doctrine of Auer deference, these three justices all joined opinions that explicitly stated that they thought Chevron deference was different from Auer deference, suggesting that Chevron might still be subject to overruling.

But if Chevron deference is eliminated, what should replace it? The best substitute for Chevron deference would be the system of interpretation employed in the several generations prior to the enactment of the Administrative Procedure Act. Under that system, as explained by Aditya Bamzai in his path-breaking article, judges would interpret the statute based on traditional canons of interpretation, including two—contemporaneous exposition and customary practice—that provide weight to certain agency interpretations.

Under the canon of contemporaneous exposition, an official governmental act would be entitled to weight as an interpretation of a statute (or of the Constitution) if it were taken close to the period of the enactment of the provision. This would apply to government acts by the judiciary and the legislature as well as those by administrative agencies. Thus, agency interpretations of statutes would be entitled to some additional weight if taken at the time of the statute’s enactment.

This canon has several attractive aspects. First, it has a clear connection to originalism. Contemporaneous interpretations are given added weight because they were adopted at the time of the law’s enactment and therefore are thought to be more likely to offer the correct interpretation—that is, one attuned to the original meaning. Second, this canon also promotes the rule of law by both providing notice to the public of the meaning of the statute and limiting the ability of the agency to change its interpretation of the law.

The second canon is that of customary practice or usage. Under this framework, an interpretation of a government actor in its official capacity would be entitled to weight if it were consistently followed over a period of time. Thus, the agency interpretation would receive additional weight if it became a regular practice, even if it were not adopted at the time of statutory enactment.

The canon of customary practice has a number of desirable features. While it does not have a connection to originalism, it does, like contemporaneous exposition, promote the rule of law. Once a customary interpretation has taken hold, the public is better able to rely on the existing interpretation and the government is more likely to follow that interpretation.

Second, the customary interpretation may also be an attractive interpretation. That the interpretation has existed over a period of time suggests that it has not created serious problems of implementation that have led courts or the agency to depart from it. While the customary interpretation may not be the most desirable one as a matter of policy, it is unlikely to be very undesirable.

This traditional interpretive approach also responds to one of the principal criticisms of eliminating Chevron deference: that it will give significant power to a judiciary that lacks expertise and can abuse its authority. I don’t agree with this criticism, since I believe that judges are expert at interpreting statutes and are subject to less bias than agencies that exercise not merely executive power, but also judicial and legislative authority.

But even if one believed that the courts were problematic, this arrangement would leave the judiciary with much less power than a regime that provides no weight to agency interpretations. The courts would often be limited by agency interpretations that accorded with the canons—interpretations adopted when the statute was enacted or that were customarily followed. Since those interpretations would be given weight, the courts would often follow them. But while these interpretations would limit the courts, they would not risk the worst dangers of Chevron deference. This interpretive approach would not allow an agency essentially free rein to change its interpretation over time in order to pursue new programs or objectives. Once the interpretation is in place, the agency would not be able to secure judicial deference if it changed the interpretation.

Importantly, this new arrangement would significantly limit agencies from using their legal discretion to modify agency statutes to combat new problems never envisioned by the enacting Congress. For example, when the Clean Air Act was passed, no one had in mind it would be addressed to anything like climate change. Yet, the EPA has used Chevron deference to change the meaning of the statute so that it can regulate greenhouse gases without Congress having to decide whether and in what way that makes sense. Such discretion gives the EPA enormous power to pursue its own agenda without having to secure the approval of the legislative or judicial branches.

In short, if Chevron deference is eliminated, there is a traditional and attractive interpretive approach that can replace it. Hopefully, the Supreme Court will take the step it refused to take in Kisor and eliminate an unwarranted form of deference.

Not Too Cold for Nuclear Power

James Conca writes at Forbes Bitter Cold Stops Coal, While Nuclear Power Excels. Excerpts in italics with my bolds.

As I woke up to Thanksgiving yesterday, I realized we in the Pacific Northwest had been cyclone bombed for the holiday.

The Columbia Generating Station nuclear power plant just north of Richland, WA. (Energy Northwest)

A Bomb Cyclone is when the barometric pressure drops by at least 24 millibars in 24 hours. The winds were wicked last night and kept us awake a good part of the time.
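To make that threshold concrete, here is a minimal Python sketch (the function name and the rate-based reading of the rule are my own illustration, not from the article) applying the 24-millibars-in-24-hours definition quoted above:

    def is_bomb_cyclone(pressure_drop_mb: float, hours: float = 24.0) -> bool:
        """Apply the rule quoted above: a storm qualifies as a 'bomb cyclone'
        if its central pressure falls by at least 24 millibars in 24 hours,
        i.e., at an average rate of 1 mb per hour or faster."""
        if hours <= 0:
            raise ValueError("hours must be positive")
        return pressure_drop_mb / hours >= 1.0  # 24 mb / 24 h = 1 mb per hour

    # A storm that deepens by 30 mb over 20 hours easily meets the threshold.
    print(is_bomb_cyclone(30, 20))  # True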

But looking out over the snow, I was thankful for the comforting plume of pure water vapor rising beyond from our nearby nuclear power plant. Judith and the kitties were, too.

Through thick and thin, extreme hot or extreme cold, the nuclear plant never seems to stop producing over 9 billion kWh of energy every year, enough to power Seattle. The same with all other nuclear plants in America.
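As a quick sanity check on that figure, here is a back-of-the-envelope sketch in Python (the 9 billion kWh comes from the article; the conversion is ordinary arithmetic):

    # 9 billion kWh per year implies an average output of roughly 1 GW.
    annual_energy_kwh = 9e9              # annual generation cited in the article
    hours_per_year = 365 * 24            # 8,760 hours
    avg_power_mw = annual_energy_kwh / hours_per_year / 1000.0
    print(f"Implied average output: {avg_power_mw:,.0f} MW")  # ~1,027 MW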

Whether it’s coal, gas or renewables, cold weather seems to hurt them like grandpa with a bum knee. And it doesn’t help that our aging energy infrastructure keeps getting a D+ from the American Society of Civil Engineers.

Most generation systems suffer outages during extreme weather, but most of those involve fossil fuel systems. Coal stacks freeze, and diesel generators simply can’t function in such low temperatures. Gas chokes up – its pipelines can’t keep up with demand – and prices skyrocket.

Wind also suffers because the hottest and coldest months are usually the least windy.

This was seen again last week, when record-breaking cold engulfed Illinois. But Exelon Generation’s 10 operating nuclear plants kept putting out their maximum power without a hitch. Coal and gas struggled.

“Even during this unseasonably cold weather, our Illinois fleet’s performance further demonstrates the reliability and resiliency of nuclear power in any kind of weather,” Bryan Hanson, Exelon’s Chief Nuclear Officer, said. “We are dedicated to being online when customers need us most, no matter what Mother Nature throws at us at any time of year.”

The problem with widespread cold or heat starts because there is a spike in electricity and gas demand, since everyone is re-adjusting their thermostats and it takes a lot more energy to keep us at comfortable temperatures during these extremes.

Interestingly, nuclear prices do not go up – the reactors just keep running. They don’t have to worry about fuel supply – they have enough on hand for years – and they don’t have to do anything special to deal with the extreme weather.

In recent years, the cost of electricity from the nearby Columbia Generating Station has fallen to 4.2¢/kWh, regardless of weather. Many gas plants increase their prices during bad weather, as much as ten-fold in New York and New England.

A diverse energy mix is really, really important. Whether it’s massive liquid-gel batteries that would maximize renewable capacity, small modular nuclear reactors, keeping and uprating existing nuclear, better pipeline technology and monitoring, better coordination among renewables – in the coming decades, whatever we can do, we should do.

But nuclear is clearly the big guy you want to walk down the street with on a cold winter’s night.

Madrid COP25 Briefing for Realists

The upcoming COP25 will be hosted by Chile, but held in Madrid because of the backlash in Santiago against the damaging effects of costly climate policies.  Brazil, the originally designated host, had earlier withdrawn after electing a skeptical president.  The change of venue has led to “a scale down of expectations” and participation from the Chilean side, said Mónica Araya, a former lead negotiator for Costa Rica, but the presidency’s priorities are unchanged.  In the wings of the COP25 talks, hosts Spain and Chile will push governments to join a coalition of progressive nations pledging to raise their targets in response to the over-the-top 2018 IPCC SR15 climate horror movie.  See UN Horror Show

Of course Spain is the setting for the adventures of Don Quixote (“don key-ho-tee”) in Cervantes’ famous novel.  The somewhat factually challenged hero charged at windmills, claiming they were enemies, and he is celebrated in the English language by two idioms:

Tilting at Windmills–meaning attacking imaginary enemies, and

Quixotic (“quick-sottic”)–meaning striving for visionary ideals.

It is clear that COP climateers are similarly engaged in some kind of heroic quest, like modern-day Don Quixotes. The only differences: They imagine a trace gas in the air is the enemy, and that windmills are our saviors.  See Climateers Tilting at Windmills

Four years ago French Mathematicians spoke out prior to COP21 in Paris, and their words provide a rational briefing for COP25 beginning in Madrid this weekend. In a nutshell:

Fighting Global Warming is Absurd, Costly and Pointless.

  • Absurd because there is no reliable evidence that anything unusual is happening in our climate.
  • Costly because trillions of dollars are wasted on immature, inefficient technologies that serve only to make cheap, reliable energy expensive and intermittent.
  • Pointless because we do not control the weather anyway.

The prestigious Société de Calcul Mathématique (Society for Mathematical Calculation) issued a detailed 195-page White Paper that presents a blistering point-by-point critique of the key dogmas of global warming. The synopsis is blunt and extremely well documented.  Here are extracts from the opening statements of the first three chapters of the SCM White Paper with my bolds and images.

Sisyphus at work.

Chapter 1: The crusade is absurd
There is not a single fact, figure or observation that leads us to conclude that the world‘s climate is in any way ‘disturbed.’ It is variable, as it has always been, but rather less so now than during certain periods or geological eras. Modern methods are far from being able to accurately measure the planet‘s global temperature even today, so measurements made 50 or 100 years ago are even less reliable. Concentrations of CO2 vary, as they always have done; the figures that are being released are biased and dishonest. Rising sea levels are a normal phenomenon linked to upthrust buoyancy; they are nothing to do with so-called global warming. As for extreme weather events — they are no more frequent now than they have been in the past. We ourselves have processed the raw data on hurricanes….

Chapter 2: The crusade is costly
Direct aid for industries that are completely unviable (such as photovoltaics and wind turbines) but presented as ‘virtuous’ runs into billions of euros, according to recent reports published by the Cour des Comptes (French Audit Office) in 2013. But the highest cost lies in the principle of ‘energy saving,’ which is presented as especially virtuous. Since no civilization can develop when it is saving energy, ours has stopped developing: France now has more than three million people unemployed — it is the price we have to pay for our virtue….

Chapter 3: The crusade is pointless
Human beings cannot, in any event, change the climate. If we in France were to stop all industrial activity (let’s not talk about our intellectual activity, which ceased long ago), if we were to eradicate all trace of animal life, the composition of the atmosphere would not alter in any measurable, perceptible way. To explain this, let us make a comparison with the rotation of the planet: it is slowing down. To address that, we might be tempted to ask the entire population of China to run in an easterly direction. But, no matter how big China and its population are, this would have no measurable impact on the Earth‘s rotation.


Full text in pdf format is available in English at link below:

The battle against global warming: an absurd, costly and pointless crusade
White Paper drawn up by the Société de Calcul Mathématique SA
(Mathematical Modelling Company, Corp.)


A second report was published in 2016, entitled Global Warming and Employment, which analyzes in depth the economic destruction from ill-advised climate change policies.

The two principal themes are that jobs are disappearing and that the destructive forces are embedded in our societies.

Jobs are Disappearing discusses issues such as:

  • The State is incapable of devising and implementing an industrial policy.
  • The fundamental absurdity of the concept of sustainable development.
  • Biofuels: an especially absurd policy leading to ridiculous taxes and job losses.
  • EU policy to reduce greenhouse gas emissions by 40% drives jobs elsewhere while being pointless: the planet has never asked for it, is completely unaware of it, and will never notice it!
  • The War against the Car and Road Maintenance undercuts economic mobility while destroying transportation sector jobs.
  • Solar and wind energy are weak, diffuse, and inconsistent, inadequate to power modern civilization.
  • Food production activities are attacked as being “bad for the planet.”
  • So-called Green jobs are entirely financed by subsidies.

The Brutalizing Whip discusses the damages to public finances and to social wealth and well-being, including these topics:

  • Taxes have never been so high.
  • The Government is borrowing more and more.
  • Dilapidated infrastructure.
  • Instead of job creation, relocations and losses.
  • The wastefulness associated with the new forms of energy.
  • Return to the economy of an underdeveloped country.

What is our predicament?
Four Horsemen are bringing down our societies:

  • The Ministry of Ecology (climate and environment);
  • Journalists;
  • Scientists;
  • Corporation Environmentalist Departments.

Steps required to recover from this demise:

  • Go back to the basic rules of research.
  • Go back to the basic rules of law.
  • Do not trust international organizations.
  • Leave the planet alone.
  • Beware of any premature optimism.

Conclusion

Climate lemmings

The real question is this: how have policymakers managed to make such absurd decisions, to blinker themselves to such a degree, when so many means of scientific investigation are available? The answer is simple: as soon as something is seen as being green, as being good for the planet, all discussion comes to an end and any scientific analysis becomes pointless or counterproductive. The policymakers will not listen to anyone or anything; they take all sorts of hasty, contradictory, damaging and absurd decisions. When will they finally be held to account?

Footnote:

The above cartoon image of climate talks includes water rising over politicians’ feet.  But actual observations made in Fiji (presiding over 2017 talks in Bonn) show sea levels are stable (link below).

Fear Not For Fiji

In 2016 SCM issued a report, Global Temperatures: Available Data and Critical Analysis.

It is a valuable description of the temperature metrics and issues regarding climate analysis.   They conclude:

None of the information on global temperatures is of any scientific value, and it should not
be used as a basis for any policy decisions. It is perfectly clear that:

  • there are far too few temperature sensors to give us a picture of the planet’s temperature;
  • we do not know what such a temperature might mean because nobody has given it
    any specific physical significance;
  • the data have been subject to much dissimulation and manipulation. There is a
    clear will not to mention anything that might be reassuring, and to highlight things
    that are presented as worrying;
  • despite all this, direct use of the available figures does not indicate any genuine
    trend towards global warming!

Climate Advice: Don’t Worry, Be Happer

William Happer’s major statement at the Best Schools Global Warming Dialogue is titled CO₂ Will Be a Major Benefit to the Earth. Readers can learn much from the whole document (the title is a link). Excerpts in italics with my bolds.

Some people claim that increased levels of atmospheric CO2 will cause catastrophic global warming, flooding from rising oceans, spreading tropical diseases, ocean acidification, and other horrors. But these frightening scenarios have almost no basis in genuine science. This Statement reviews facts that have persuaded me that more CO2 will be a major benefit to the Earth.

Discussions of climate today almost always involve fossil fuels. Some people claim that fossil fuels are inherently evil. Quite the contrary, the use of fossil fuels to power modern society gives the average person a standard of living that only the wealthiest could enjoy a few centuries ago. But fossil fuels must be extracted responsibly, minimizing environmental damage from mining and drilling operations, and with due consideration of costs and benefits. Similarly, fossil fuels must be burned responsibly, deploying cost-effective technologies that minimize emissions of real pollutants such as fly ash, carbon monoxide, oxides of sulfur and nitrogen, heavy metals, volatile organic compounds, etc.

Extremists have conflated these genuine environmental concerns with the emission of CO2, which cannot be economically removed from exhaust gases. Calling CO2 a “pollutant” that must be eliminated, with even more zeal than real pollutants, is Orwellian Newspeak.[3] “Buying insurance” against potential climate disasters by forcibly curtailing the use of fossil fuels is like buying “protection” from the mafia. There is nothing to insure against, except the threats of an increasingly totalitarian coalition of politicians, government bureaucrats, crony capitalists, thuggish nongovernmental organizations like Greenpeace, etc.

Figure 1. The ratio, RCO2, of past atmospheric CO2 concentrations to average values (about 300 ppm) of the past few million years. This particular proxy record comes from analyzing the ratio of the rare stable isotope 13C to the dominant isotope 12C in carbonate sediments and paleosols. Other proxies give qualitatively similar results.

Life on Earth does better with more CO2. CO2 levels are increasing

Fig. 1 summarizes the most important theme of this discussion. It is not true that releasing more CO2 into the atmosphere is a dangerous, unprecedented experiment. The Earth has already “experimented” with much higher CO2 levels than we have today or that can be produced by the combustion of all economically recoverable fossil fuels.

Radiative Cooling of the Earth and the Role of Water and Clouds

Without sunlight and only internal heat to keep warm, the Earth’s absolute surface temperature T would be very cold indeed. A first estimate can be made with the celebrated Stefan-Boltzmann formula

 J = εσT⁴   [Equation 1]

where J is the thermal radiation flux per unit of surface area, and the Stefan-Boltzmann constant (originally determined from experimental measurements) has the value σ = 5.67 × 10⁻⁸ W/(m²K⁴). If we use this equation to calculate how warm the surface would have to be to radiate the same thermal energy as the mean solar flux, Js = F/4 = 340 W/m², we find Ts = 278 K or 5 C, a bit colder than the average temperature (287 K or 14 C) of the Earth’s surface,[19] but “in the ball park.”
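As a check on this estimate, here is a minimal Python sketch that solves Equation 1 for T, assuming a perfect emitter (ε = 1), as the estimate above implicitly does:

    # Solve Equation 1, J = eps * sigma * T^4, for the temperature a surface
    # must have to radiate away the mean absorbed solar flux Js = F/4.
    sigma = 5.67e-8                 # Stefan-Boltzmann constant, W/(m^2 K^4)
    Js = 340.0                      # mean solar flux, W/m^2 (F/4 in the text)
    eps = 1.0                       # assumed emissivity of a perfect emitter
    Ts = (Js / (eps * sigma)) ** 0.25
    print(f"Ts = {Ts:.0f} K = {Ts - 273.15:.0f} C")  # ~278 K, about 5 C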

Figure 5. The temperature profile of the Earth’s atmosphere.[20] This illustration is for mid-latitudes, like Princeton, NJ, at 40.4° N, where the tropopause is usually at an altitude of about 11 km. The tropopause is closer to 17 km near the equator, and as low as 9 km near the north and south poles.

These estimates can be refined by taking into account the Earth’s atmosphere. In the Interview we already discussed the representative temperature profile, Fig. 5. The famous “blue marble” photograph of the Earth,[21] reproduced in Fig. 6, is also very instructive. Much of the Earth is covered with clouds, which reflect about 30% of sunlight back into space, thereby preventing its absorption and conversion to heat. Rayleigh scattering (which gives the blue color of the daytime sky) also deflects shorter-wavelength sunlight back to space and prevents heating.

Today, whole-Earth images analogous to Fig. 6 are continuously recorded by geostationary satellites, orbiting at the same angular velocity as the Earth, and therefore hovering over nearly the same spot on the equator at an altitude of about 35,800 km.[23] In addition to visible images, which can only be recorded in daytime, the geostationary satellites record images of the thermal radiation emitted both day and night.

Figure 7. Radiation with wavelengths close to 10.7 µ (1 µ = 10⁻⁶ m), as observed with a geostationary satellite over the western hemisphere of the Earth.[23] This is radiation in the infrared window of Fig. 4, where the surface can radiate directly to space from cloud-free regions.

Fig. 7 shows radiation with wavelengths close to 10.7 µ in the “infrared window” of the absorption spectrum shown in Fig. 4, where there is little absorption from either the main greenhouse gas, H2O, or from less-important CO2. Darker tones in Fig. 7 indicate more intense radiation. The cold “white” cloud tops emit much less radiation than the surface, which is “visible” at cloud-free regions of the Earth. This is the opposite from Fig. 6, where maximum reflected sunlight is coming from the white cloud tops, and much less reflection from the land and ocean, where much of the solar radiation is absorbed and converted to heat.

As one can surmise from Fig. 6 and Fig. 7, clouds are one of the most potent factors that control the surface temperature of the earth. Their effects are comparable to those of the greenhouse gases, H2O and CO2, but it is much harder to model the effects of clouds. Clouds tend to cool the Earth by scattering visible and near-visible solar radiation back to space before the radiation can be absorbed and converted to heat. But clouds also prevent the warm surface from radiating directly to space. Instead, the radiation comes from the cloud tops that are normally cooler than the surface. Low-cloud tops are not much cooler than the surface, so low clouds are net coolers. In Fig. 7, a large area of low clouds can be seen off the coast of Chile. They are only slightly cooler than the surrounding waters of the Pacific Ocean in cloud-free areas.

Figure 8. Spectrally resolved, vertical upwelling thermal radiation I from the Earth, the jagged lines, as observed by a satellite.[28] The smooth, dashed lines are theoretical Planck brightnesses, B, for various temperatures. The vertical units are 1 c.g.s. = 1 erg/(s cm² sr cm⁻¹) = 1 mW/(m² sr cm⁻¹).

Except at the South Pole, the data of Fig. 8 show that the observed thermal radiation from the Earth is less intense than Planck radiation from the surface would be without greenhouse gases. Although the surface radiation is completely blocked in the bands of the greenhouse gases, as one would expect from Fig. 4, radiation from H2O and CO2 molecules at higher, colder altitudes can escape to space. At the “emission altitude,” which depends on frequency ν, there are not enough greenhouse molecules left overhead to block the escape of radiation. The thermal emission cross section of CO2 molecules at band center is so large that the few molecules in the relatively warm upper stratosphere (see Fig. 5) produce the sharp spikes in the center of the bands of Fig. 8. The flat bottoms of the CO2 bands of Fig. 8 are emission from the nearly isothermal lower stratosphere (see Fig. 5) which has a temperature close to 220 K over most of the Earth.

It is hard for H2O molecules to reach cold, higher altitudes, since the molecules condense onto snowflakes or rain drops in clouds. So the H2O emissions to space come from the relatively warm and humid troposphere, and they are only moderately less intense than the Planck brightness of the surface. CO2 molecules radiate to space from the relatively dry and cold lower stratosphere. So for most latitudes, the CO2 band observed from space has much less intensity than the Planck brightness of the surface.

Concentrations of H2O vapor can be quite different at different locations on Earth. A good example is the bottom panel of Fig. 8, the thermal radiation from the Antarctic ice sheet, where almost no H2O emission can be seen. There, most of the water vapor has been frozen onto the ice cap, at a temperature of around 190 K. Near both the north and south poles there is a dramatic wintertime inversion[30] of the normal temperature profile of Fig. 5. The ice surface becomes much colder than most of the troposphere and lower stratosphere.

Cloud tops in the intertropical convergence zone (ITCZ) can reach the tropopause and can be almost as cold as the Antarctic ice sheet. The spectral distribution of cloud-top radiation from the ITCZ looks very similar to cloud-free radiation from the Antarctic ice, shown on the bottom panel of Fig. 8.

Convection

Radiation, which we have discussed above, is an important part of the energy transfer budget of the earth, but not the only part. More solar energy is absorbed in the tropics, near the equator, where the sun beats down nearly vertically at noon, than at the poles where the noontime sun is low on the horizon, even at midsummer, and where there is no sunlight at all in the winter. As a result, more visible and near infrared solar radiation (“short-wave radiation” or SWR) is absorbed in the tropics than is radiated back to space as thermal radiation (“long-wave radiation” or LWR). The opposite situation prevails near the poles, where thermal radiation releases more energy to space than is received by sunlight. Energy is conserved because the excess solar energy from the tropics is carried to the poles by warm air currents, and to a lesser extent, by warm ocean currents. The basic physics is sketched in Fig. 11.[35]

Figure 11. Most sunlight is absorbed in the tropics, and some of the heat energy is carried by air currents to the polar regions to be released back into space as thermal radiation. Along with energy, angular momentum — imparted to the air from the rotating Earth’s surface near the equator — is transported to higher northern and southern latitudes, where it is reabsorbed by the Earth’s surface. The Hadley circulation near the equator is largely driven by buoyant forces on warm, solar-heated air, but for mid latitudes the “Coriolis force” due to the rotation of the earth leads to transport of energy and angular momentum through slanted “baroclinic eddies.” Among other consequences of the conservation of angular momentum are the easterly trade winds near the equator and the westerly winds at mid latitudes.

Equilibrium Climate Sensitivity

If increasing CO2 causes very large warming, harm can indeed be done. But most studies suggest that warmings of up to 2 K will be good for the planet,[38] extending growing seasons, cutting winter heating bills, etc. We will denote temperature differences in Kelvin (K) since they are exactly the same as differences in Celsius (C). A temperature change of 1 K = 1 C is equal to a change of 1.8 Fahrenheit (F).

If a 50% increase of CO2 were to increase the temperature by 3.4 K, as in Arrhenius’s original estimate mentioned above, the doubling sensitivity would be S = 3.4 K/log2(1.5) = 5.8 K. Ten years later, on page 53 of his popular book, Worlds in the Making: The Evolution of the Universe,[40] Arrhenius again states the logarithmic law of warming, with a slightly smaller climate sensitivity, S = 4 K.
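The logarithmic law referred to here can be written as warming = S · log₂(C/C₀), where S is the doubling sensitivity and C/C₀ is the ratio of new to old CO₂ concentration. A short Python sketch reproducing the arithmetic above:

    import math

    # Recover the doubling sensitivity S from Arrhenius's estimate of 3.4 K
    # of warming for a 50% CO2 increase, using warming = S * log2(C / C0).
    delta_T = 3.4                         # K, warming for a 50% CO2 increase
    S = delta_T / math.log2(1.5)          # log2(1.5) is about 0.585
    print(f"S = {S:.1f} K per doubling")  # ~5.8 K, as stated in the text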

Convection of the atmosphere, water vapor, and clouds all interact in a complicated way with the change of CO2 to give the numerical value of the doubling sensitivity S of Eq. (21). Remarkably, Arrhenius somehow guessed the logarithmic dependence on CO2 concentration before Planck’s discovery of how thermal radiation really works.

More than a century after Arrhenius, and after the expenditure of many tens of billions of dollars on climate science, the official value of S still differs little from the guess that Arrhenius made in 1912: S = 4 K.

Could it be that the climate establishment does not want to work itself out of a job?

Overestimate of Sensitivity

Contrary to the predictions of most climate models, there has been very little warming of the Earth’s surface over the last two decades. The discrepancy between models and observations is summarized by Fyfe, Gillett, and Zwiers, as shown in the Fyfe Fig. 1 above.

At this writing, more than 50 mechanisms have been proposed to explain the discrepancy of Fyfe Fig. 1. These range from aerosol cooling to heat absorption by the ocean. Some of the more popular excuses for the discrepancy have been summarized by Fyfe, et al. But the most straightforward explanation for the discrepancy between observations and models is that the doubling sensitivity, which most models assume to be close to the “most likely” IPCC value, S = 3 K, is much too large.

If one assumes negligible feedback, where other properties of the atmosphere change little in response to additions of CO2, the doubling sensitivity can be estimated to be about S = 1 K, for example, as we discussed in connection with Eq. (19). The much larger doubling sensitivities claimed by the IPCC, which look increasingly dubious with each passing year, are due to “positive feedbacks.” A favorite positive feedback is the assumption that water vapor will be lofted to higher, colder altitudes by the addition of more CO2, thereby increasing the effective opacity of the vapor. Changes in cloudiness can also provide either positive feedback which increases S or negative feedback which decreases S. The simplest interpretation of the discrepancy of Fig. 13 and Fig. 14 is that the net feedback is small and possibly even negative. Recent work by Harde indicates a doubling sensitivity of S = 0.6 K.[46]

Figure 17. The analysis of satellite observations by Dr. Randall J. Donohue and co-workers[53] shows a clear greening of the earth from the modest increase of CO2 concentrations from about 340 ppm to 400 ppm from the year 1982 to 2010. The greening is most pronounced in arid areas where increased CO2 levels diminish the water requirement of plants.

Benefits of CO2

More CO2 in the atmosphere will be good for life on planet earth. Few realize that the world has been in a CO2 famine for millions of years — a long time for us, but a passing moment in geological history. Over the past 550 million years since the Cambrian, when abundant fossils first appeared in the sedimentary record, CO2 levels have averaged many thousands of parts per million (ppm), not today’s few hundred ppm, which is not that far above the minimum level, around 150 ppm, when many plants die of CO2 starvation.

All green plants grow faster with more atmospheric CO2. It is found that the growth rate is approximately proportional to the square root of the CO2 concentrations, so the increase in CO2 concentrations from about 300 ppm to 400 ppm over the past century should have increased growth rates by a factor of about √(4/3) = 1.15, or 15%. Most crop yields have increased by much more than 15% over the past century. Better crop varieties, better use of fertilizer, better water management, etc., have all contributed. But the fact remains that a substantial part of the increase is due to more atmospheric CO2.
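A one-line check of that square-root arithmetic, using the concentrations given above:

    import math

    # Growth rate ~ sqrt(CO2 concentration): the rise from ~300 ppm to
    # ~400 ppm implies a growth-rate gain of sqrt(400/300) - 1.
    gain = math.sqrt(400.0 / 300.0) - 1.0
    print(f"Implied growth-rate increase: {gain:.0%}")  # ~15%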

But the nutritional value of additional CO2 is only part of its benefit to plants. Of equal or greater importance, more CO2 in the atmosphere makes plants more drought-resistant. Plant leaves are perforated by stomata, little holes in the gas-tight surface skin that allow CO2 molecules to diffuse from the outside atmosphere into the moist interior of the leaf where they are photosynthesized into carbohydrates.

In the course of evolution, land plants have developed finely tuned feedback mechanisms that allow them to grow leaves with more stomata in air that is poor in CO2, like today, or with fewer stomata for air that is richer in CO2, as has been the case over most of the geological history of land plants.[51] If the amount of CO2 doubles in the atmosphere, plants reduce the number of stomata in newly grown leaves by about a factor of two. With half as many stomata to leak water vapor, plants need about half as much water. Satellite observations like those of Fig. 17 from R.J. Donohue, et al.,[52] have shown a very pronounced “greening” of the Earth as plants have responded to the modest increase of CO2 from about 340 ppm to 400 ppm during the satellite era. More greening and greater agricultural yields can be expected as CO2 concentrations increase further.

Climate Science

Droughts, floods, heat waves, cold snaps, hurricanes, tornadoes, blizzards, and other weather- and climate-related events will complicate our life on Earth, no matter how many laws governments pass to “stop climate change.” But if we understand these phenomena, and are able to predict them, they will be much less damaging to human society. So I strongly support high-quality research on climate and related fields like oceanography, geology, solar physics, etc. Especially important are good measurement programs like the various satellite measurements of atmospheric temperature[59] or the Argo[60] system of floating buoys that is revolutionizing our understanding of ocean currents, temperature, salinity, and other important properties.

But too much “climate research” money is pouring into very questionable efforts, like mitigation of the made-up horrors mentioned above. It reminds me of Gresham’s Law: “Bad money drives out good.”[61] The torrent of money showered on scientists willing to provide rationales, however shoddy, for the war on fossil fuels, and cockamamie mitigation schemes for non-existent problems, has left insufficient funding for honest climate science.

Summary

The Earth is in no danger from increasing levels of CO2. More CO2 will be a major benefit to the biosphere and to humanity. Some of the reasons are:

  • As shown in Fig. 1, much higher CO2 levels than today’s prevailed over most of the last 550 million years of higher life forms on Earth. Geological history shows that the biosphere does better with more CO2.
  • As shown in Fig. 13 and Fig. 14, observations over the past two decades show that the warming predicted by climate models has been greatly exaggerated. The temperature increase for doubling CO2 levels appears to be close to the feedback-free doubling sensitivity of S = 1 K, and much less than the “most likely” value S = 3 K promoted by the IPCC and assumed in most climate models.
  • As shown in Fig. 12, if CO2 emissions continue at levels comparable to those today, centuries will be needed for the added CO2 to warm the Earth’s surface by 2 K, generally considered to be a safe and even beneficial amount.
  • Over the past tens of millions of years, the Earth has been in a CO2 famine with respect to the optimal levels for plants, the levels that have prevailed over most of the geological history of land plants. There was probably CO2 starvation of some plants during the coldest periods of recent ice ages. As shown in Figs. 15–17, more atmospheric CO2 will substantially increase plant growth rates and drought resistance.

There is no reason to limit the use of fossil fuels because they release CO2 to the atmosphere. However, fossil fuels do need to be mined, transported, and burned with cost-effective controls of real environmental problems — for example, fly ash, oxides of sulfur and nitrogen, volatile organic compounds, groundwater contamination, etc.

Sometime in the future, perhaps by the year 2050 when most of the original climate crusaders will have passed away, historians will write learned papers on how it was possible for a seemingly enlightened civilization of the early 21st century to demonize CO2, much as the most “Godly” members of society executed unfortunate “witches” in earlier centuries.

Dr. William Happer Background: Co-Founder and current Director of the CO2 Coalition, Dr. William Happer, Professor Emeritus in the Department of Physics at Princeton University, is a specialist in modern optics, optical and radiofrequency spectroscopy of atoms and molecules, radiation propagation in the atmosphere, and spin-polarized atoms and nuclei.

From September 2018 to September 2019, Dr. Happer served as Deputy Assistant to the President and Senior Director of Emerging Technologies on the National Security Council.

He has published over 200 peer-reviewed scientific papers. He is a Fellow of the American Physical Society, the American Association for the Advancement of Science, and a member of the American Academy of Arts and Sciences, the National Academy of Sciences and the American Philosophical Society. He was awarded an Alfred P. Sloan Fellowship in 1966, an Alexander von Humboldt Award in 1976, the 1997 Broida Prize and the 1999 Davisson-Germer Prize of the American Physical Society, and the Thomas Alva Edison Patent Award in 2000.

 

 

Beware Deep Electrification Policies

It is becoming fashionable on the left coasts to banish energy supplies for equipment running on fossil fuels. For example, consider recent laws prohibiting natural gas hookups to homes. Elizabeth Weise published at USA Today: No more fire in the kitchen: Cities are banning natural gas in homes to save the planet. Excerpts in italics with my bolds.

Fix global warming or cook dinner on a gas stove?

That’s the choice for people in 13 cities and one county in California and one town in Massachusetts that have enacted new zoning codes encouraging or requiring all-electric new construction.

The codes, most of them passed since June, are meant to keep builders from running natural gas lines to new homes and apartments, with an eye toward creating fewer legacy gas hookups as the nation shifts to carbon-neutral energy sources.

The most recent came on Wednesday when the town meeting in Brookline, Massachusetts, approved a rule prohibiting installation of gas lines into major new construction and in gut renovations.

For proponents, it’s a change that must be made to fight climate change. For natural gas companies, it’s a threat to their existence. And for some cooks who love to prepare food with flame, it’s an unthinkable loss.

Another Dangerous Idea that Doesn’t Scale

Once again activists seize upon an idea that doesn’t scale up to the challenge they have imagined. Add this to other misguided climate policies devoted to restricting use of fossil fuels. Apart from dictating consumers’ choices for Earth’s sake, the push could well backfire for other reasons. Jude Clemente writes at Forbes: ‘Deep Electrification’ Means More Natural Gas. Excerpts in italics with my bolds. It’s a warning to authorities about outlawing traditional cars, cooking and heating equipment, thereby putting all their eggs in the electric energy basket.

For environmental reasons, there’s an ongoing push to “electrify everything,” from cars to port operations to heating.

The idea is that a “deep electrification” will help lower greenhouse gas emissions and combat climate change.

The reality, however, is that more electrification will surge the need for electricity, an obvious fact that seems to be getting forgotten.

The majority of this increase occurs in the transportation sector: electric cars can increase home power usage by 50% or more.

The U.S. National Renewable Energy Laboratory (NREL) says that “electrification has the potential to significantly increase overall demand for electricity.”

NREL reports that a “high” electrification scenario would up our power demand by around 40% through 2050.

A high electrification scenario would grow our annual power consumption by 80 terawatt hours per year.

For comparison, that is like adding a Colorado and Massachusetts of new demand each year.

The Electric Power Research Institute (EPRI) confirms that electrification could boom our power demand by over 50%.

From load shifting to higher peak demand, deep electrification will present major challenges for us.

At around 4,050 terawatt hours, U.S. power demand has been flat over the past decade since The Great Recession.

Ultimately, much higher electricity demand favors all sources of electricity, a “rising tide lifts all boats” sort of thing.

But in particular, it favors gas because gas supplies almost 40% of U.S. electricity generation, up from 20% a decade ago.

Gas is cheap, reliable, flexible, and backs up intermittent wind and solar.

In fact, even over the past decade with flat electricity demand, U.S. gas used for electricity has still managed to balloon 60% to 30 Bcf/d.
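
The excerpt’s figures can be cross-checked against one another. A rough consistency sketch in Python, assuming a fleet-average heat rate of about 7.8 MMBtu per MWh and roughly 1.03 million MMBtu per Bcf (both assumed conversion factors, not from the article):

```python
# Cross-check: does 30 Bcf/d of gas burn square with ~40% of 4,050 TWh?
us_demand_twh = 4050.0          # annual U.S. power demand (from the article)
gas_share = 0.40                # gas share of generation (from the article)
gas_gen_twh = us_demand_twh * gas_share   # ~1,620 TWh/yr from gas

gas_bcf_d = 30.0                # gas burned for power (from the article)
mmbtu_per_bcf = 1.03e6          # assumed energy content of pipeline gas
heat_rate = 7.8                 # assumed MMBtu per MWh (combined-cycle fleet avg)

implied_twh = gas_bcf_d * 365 * mmbtu_per_bcf / heat_rate / 1e6  # MWh -> TWh
print(f"generation implied by 30 Bcf/d: {implied_twh:.0f} TWh/yr "
      f"vs {gas_gen_twh:.0f} TWh/yr from the 40% share")
```

The two estimates agree within about 10–15 percent, which is reasonable given that the heat rate is a fleet-average guess.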

At 235,000 MW, the U.S. Department of Energy has gas easily adding the most power capacity in the decades ahead.

Electrification and rising electricity needs show why we need realistic energy policies.

As the heart of our electric power system, natural gas will surely remain essential.

Indeed, EPRI models that U.S. gas usage increases under “all” electrification scenarios even if gas prices more than double to $6.00 per MMBtu.

Some are forgetting that the clear growth sectors for the U.S. gas industry are a triad, in order: LNG exports, electricity, manufacturing.

The industry obviously knows, for instance, that the residential sector hasn’t seen any gains in gas demand in 50 years.

Figure: Flat for a decade, U.S. power demand is set to boom as environmental goals push us to “electrify …” Data source: NREL; JTC.

Footnote:  There is the further unreality of replacing thermal or nuclear power plants with renewables.

The late David MacKay showed that the land areas needed to produce 225 MW of power were very different: 15 acres for a small modular nuclear reactor, 2400 acres for average solar cell arrays, and 60,000 acres for an average wind farm.
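
The quoted acreages imply very different power densities per unit of land. A quick Python conversion of the article’s own numbers (the acre-to-square-metre factor is the only added constant):

```python
# Land-power density implied by the MacKay figures quoted above.
ACRE_M2 = 4046.86        # square metres per acre
P_WATTS = 225e6          # 225 MW, the common output in the comparison

for name, acres in (("small modular nuclear", 15),
                    ("solar cell arrays", 2400),
                    ("wind farm", 60000)):
    density = P_WATTS / (acres * ACRE_M2)    # watts per square metre of land
    print(f"{name:21s}: {density:7.1f} W/m^2")
# nuclear ~3,700 W/m^2, solar ~23 W/m^2, wind ~0.9 W/m^2
```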

Gray area required for wind farms, yellow area for solar farms, to power London UK.

Michael Kelly also estimated the load increase from electrifying transportation and heating.

Note that if such a conversion of transport fuel to electricity were to take place, the grid capacity would have to treble from what we have today.

But in fact it is the heating that is the real problem. Today that is provided by gas, with gas flows varying by a factor of eight between highs in winter and lows in summer. If heat were to be electrified along with transport, the grid capacity would have to be expanded by a factor between five and six from today.

See also Kelly’s Climate Clarity

Your emails are ruining the environment: study

Your pointless emails aren’t just boring people — they are ruining the environment.

Sending email has such a high carbon footprint that just cutting out a single email a day — such as ones that simply say “LOL” — could have the same effect as removing thousands of cars from the street, according to a new study of habits in the UK.

The study, commissioned by OVO Energy, England’s leading energy supply company, used the UK as a case study and found that one less “thank you” email a day would cut 16,433 tons of carbon caused by the high-energy servers used to send the online messages.

That’s the equivalent of 81,152 flights to Madrid or taking 3,334 diesel cars off the road, the research said.

According to the research, more than 64 million “unnecessary emails” are sent every day in the UK, contributing to 23,475 tons of carbon a year to its footprint.
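
Taken at face value, the study’s own numbers imply a tiny per-email footprint, which puts the headline in perspective. A short Python sketch using only figures quoted above:

```python
# Per-unit footprints implied by the study's own headline numbers.
emails_per_day = 64e6        # "unnecessary" UK emails per day (from the article)
tons_per_year = 23475.0      # tons of carbon attributed to them annually

g_per_email = tons_per_year * 1e6 / (emails_per_day * 365)
print(f"implied footprint: ~{g_per_email:.1f} g of carbon per email")   # ~1 g

# The article's equivalences for 16,433 tons saved:
print(f"per diesel car taken off the road: {16433 / 3334:.1f} t/yr")    # ~4.9 t
print(f"per flight to Madrid: {16433 / 81152 * 1000:.0f} kg")           # ~200 kg
```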

The top 10 most “unnecessary” emails include: “Thank you,” “Thanks,” “Have a good weekend,” “Received,” “Appreciated,” “Have a good evening,” “Did you get/see this,” “Cheers,” “You too,” and “LOL,” according to the study.

OVO Energy is now calling for tech-savvy folks to “think before you thank” in order to save more than 16,433 tons of carbon per year.

The research revealed that 71 percent of Brits wouldn’t mind not receiving a “thank you” email “if they knew it was for the benefit of the environment and helping to combat the climate crisis.”

A total of 87 percent of the UK “would be happy to reduce their email traffic to help support the same cause,” according to the study.

One of the researchers, Mike Berners-Lee, a professor at Lancaster University in Lancashire, England, said in a statement: “Whilst the carbon footprint of an email isn’t huge, it’s a great illustration of the broader principle that cutting the waste out of our lives is good for our wellbeing and good for the environment.”

“Every time we take a small step towards changing our behavior, be that sending fewer emails or carrying a reusable coffee cup, we need to treat it as a reminder to ourselves and others that we care even more about the really big carbon decisions,” Berners-Lee said.

Comment:  I kept pinching myself reading this article, sure that it must be satire from the Onion or Babylon Bee.  But no, it is published without tongue in cheek at the NY Post, not usually prone to political correctness.  Your emails are ruining the environment: study

If this is what passes for news, especially in a paper willing to question conventional thinking, how could anyone invent anything more preposterous as parody?  I am dumbfounded.

Footnote:  I hope tweets are alright.

Pat Sajak tweeted: “When someone says (and they will), ‘O.K. Boomer,’ your response can be, ‘O.K. Junior, or should I say O.K. Young Whippersnapper (Google it).’”

Finding Lost Continents, Like Zealandia

What Are Lost Continents, and Why Are We Discovering So Many? is published at The Conversation by Simon Williams, Joanne Whittaker & Maria Seton.  Excerpts in italics with my bolds.

For most people, continents are Earth’s seven main large landmasses.

But geoscientists have a different take on this. They look at the type of rock a feature is made of, rather than how much of its surface is above sea level.

In the past few years, we’ve seen an increase in the discovery of lost continents. Most of these have been plateaus or mountains made of continental crust hidden from our view, below sea level.

One example is Zealandia, the world’s eighth continent that extends underwater from New Zealand.

Several smaller lost continents, called microcontinents, have also recently been discovered submerged in the eastern and western Indian Ocean.

But why, with so much geographical knowledge at our fingertips, are we still discovering lost continents in the 21st century?

Down to the details
There are many mountains and plateaus below sea level scattered across the oceans, and these have been mapped from space. They are the lighter blue areas you can see on Google Maps.

However, not all submerged features qualify as lost continents. Most are made of materials quite distinct from what we traditionally think of as continental rock, and are instead formed by massive outpourings of magma.

A good example is Iceland which, despite being roughly the size of New Zealand’s North Island, is not considered continental in geological terms. It’s made up mainly of volcanic rocks deposited over the past 18 million years, so it is geologically young.

The only foolproof way to tell the difference between massive submarine volcanoes and lost continents is to collect rock samples from the deep ocean.

Finding the right samples is challenging, to say the least. Much of the seafloor is covered in soft, gloopy sediment that obscures the solid rock beneath.

We use a sophisticated mapping system to search for steep slopes on the seafloor that are more likely to be free of sediment. We then send a metal rock-collecting bucket to grab samples.

The more we explore and sample the depths of the oceans, the more likely we’ll be to discover more lost continents.

The ultimate lost continent
Perhaps the best known example of a lost continent is Zealandia. While the geology of New Zealand and New Caledonia has been known for some time, it’s only recently that their common heritage as part of a much larger continent (which is 95% underwater) has been accepted.

This acceptance has been the culmination of years of painstaking research, and exploration of the geology of deep oceans through sample collection and geophysical surveys.


New discoveries continue to be made.

During a 2011 expedition, we discovered two lost continental fragments more than 1,000km west of Perth.

The granite lying in the middle of the deep ocean there looked similar to what you would find around Cape Leeuwin, in Western Australia.

Other lost continents
However, not all lost continents are found hidden beneath the oceans.

Some existed only in the geological past, millions to billions of years ago, and later collided with other continents as a result of plate tectonic motions.

Folded marine sediments on the Whangaparaoa Peninsula north of Auckland, New Zealand, reflecting the formation of a convergent plate boundary in northern New Zealand at the beginning of the Miocene Period, around 23 million years ago. (Image: Adriana Dutkiewicz)

Their only modern-day remnants are small slivers of rock, usually squished up in mountain chains such as the Himalayas. One example is Greater Adria, an ancient continent now embedded in the mountain ranges across Europe.

Due to the perpetual motion of tectonic plates, it’s the fate of all continents to ultimately reconnect with another and form a supercontinent.

But the fascinating life and death cycle of continents is the topic of another story.

 

No, CO2 Doesn’t Drive the Polar Vortex (Updated)

Simulation of jet stream pattern July 22. (VentuSky.com)

We are heading into winter this year at the bottom of a solar cycle, with ocean oscillations due for cooling phases. The folks at Climate Alarm Central (CAC) are well aware of this, and are working hard so people won’t notice that global cooling contradicts global warming. No indeed: contortionist papers and headlines warn us that CO2 not only produces a hothouse earth overrun with rats and other vermin, but also causes ice ages whenever it feels like it.

Update Nov. 26, 2019: Much ado about the polar jet stream recently with a publication by Tim Woollings, A battle for the jet stream is raging above our heads.  The claims are not new:

The jet has always varied – and has always affected our weather patterns. But now climate change is affecting our weather too. As I explore in my latest book, it’s when the wanderings of the jet and the hand of climate change add up that we get record-breaking heatwaves, floods and droughts – but not freezes.

The same supposition was made last year in an article by alarmist Jason Samenow at the Washington Post: Study: Freak summer weather and wild jet-stream patterns are on the rise because of global warming. Excerpts in italics with my bolds.

In many ways, the summer of 2018 marked a turning point, when the effects of climate change — perhaps previously on the periphery of public consciousness — suddenly took center stage. Record high temperatures spread all over the Northern Hemisphere. Wildfires raged out of control. And devastating floods were frequent.

Michael Mann, climate scientist at Pennsylvania State University, along with colleagues, has published a new study that connects these disruptive weather extremes with a fundamental change in how the jet stream is behaving during the summer. Linked to the warming climate, the study suggests this change in the atmosphere’s steering current is making these extremes occur more frequently, with greater intensity, and for longer periods of time.

The study projects this erratic jet-stream behavior will increase in the future, leading to more severe heat waves, droughts, fires and floods.

The jet stream is changing not only because the planet is warming up but also because the Arctic is warming faster than the mid-latitudes, the study says. The jet stream is driven by temperature contrasts, and these contrasts are shrinking. The result is a slower jet stream with more wavy peaks and troughs that Mann and his study co-authors ascribe to a process known as “quasi-resonant amplification.”

The altered jet-stream behavior is important because when it takes deep excursions to the south in the summer, it sets up a collision between cool air from the north and the summer’s torrid heat, often spurring excessive rain. But when the jet stream retreats to the north, bulging heat domes form underneath it, leading to record heat and dry spells.

The study, published Wednesday in Science Advances, finds that these quasi-resonant amplification events — in which the jet stream exhibits this extreme behavior during the summer — are predicted to increase by 50 percent this century if emissions of carbon dioxide and other greenhouse gases continue unchecked.

Whereas previous work conducted by Mann and others had identified a signal for an increase in these events, this study for the first time examined how they may change in the future using climate model simulations.

“Looking at a large number of different computer models, we found interesting differences,” said Stefan Rahmstorf from the Potsdam Institute for Climate Impact Research and a co-author of the study, in a news release. “Distinct climate models provide quite diverging forecasts for future climate resonance events. However, on average they show a clear increase in such events.”

Although model projections suggest these extreme jet-stream patterns will increase as the climate warms, the study concluded that their increase can be slowed if greenhouse gas emissions are reduced along with particulate pollution in developing countries. “[T]he future is still very much in our hands when it comes to dangerous and damaging summer weather extremes,” Mann said. “It’s simply a matter of our willpower to transition quickly from fossil fuels to renewable energy.”

Mann has been leading the charge to blame anticipated cooling on fossil fuels; his previous attempt claimed that CO2 is causing a slowdown of the AMOC (part of which is the Gulf Stream), resulting in global cooling, even an ice age. The same idea underlay the scary 2004 movie The Day After Tomorrow.

Other scientists are more interested in the truth than in hype. An example is this AGU publication by D. A. Smeed et al., The North Atlantic Ocean Is in a State of Reduced Overturning. Excerpts in italics with my bolds.

Figure 3: Indices of subsurface temperature, sea surface height (SSH), latent heat flux (LHF), and sea surface temperature (SST). SST (purple) is plotted using the same scale as subsurface temperature (blue) in the upper panel. The upper panel shows 24-month filtered values of de-seasonalized anomalies along with the non-Ekman part of the AMOC. In the lower panel, we show three-year running means of the indices going back to 1985 (1993 for the SSH index).

Changes in ocean heat transport and SST are expected to modify the net air‐sea heat flux. The changes in the total air‐sea flux (Figure S4, data obtained from the National Centers for Environmental Prediction‐National Center for Atmospheric Research reanalysis; Kalnay et al., 1996) are almost all due to the change in LHF. The third panel of Figure 3 shows the changes in LHF between the two periods. There is a strong signal with increased heat loss from the ocean over the Gulf Stream. That the area of increased heat loss coincides with the location of warming SST indicates that the changes in air‐sea fluxes are driven by the ocean.

Whilst the AMOC has only been continuously measured since 2004, the indices of SSH, heat content, SST, and LHF can be calculated farther back in time (Figure 3, bottom). Over this longer time period, all four indices are strongly correlated with one another (Table S5; correlations were calculated using the nonparametric method described in McCarthy et al., 2015). These data suggest that measurement of the AMOC at 26°N started close to a maximum in the overturning. Prior to 2007 the indices show variability on a time scale of 8 to 10 years and no trend is evident, but since 2014 all indices have had values lower than any other year since 1985.

Previous studies have shown that seasonal and interannual changes in the subtropical AMOC are forced primarily by changing wind stress mediated by Rossby waves (Zhao & Johns, 2014a, 2014b). There is growing evidence (Delworth et al., 2016; Jackson et al., 2016) that the longer‐term changes of the AMOC over the last decade are also associated with thermohaline forcing and that the changed circulation alters the pattern of ocean‐atmosphere heat exchange (Gulev et al., 2013). The role of ocean circulation in decadal climate variability has been challenged in recent years with authors suggesting that external, atmospheric‐driven changes could produce the observed variability in Atlantic SSTs (Clement et al., 2015). However, the direct observation of a weakened AMOC supports a role for ocean circulation in decadal Atlantic climate variability.

Our results show that the previously reported decline of the AMOC (Smeed et al., 2014) has been arrested, but the length of the observational record of the AMOC is still short relative to the time scales of important decadal variations that exist in the Atlantic. Understanding is therefore constantly evolving. What we identify as a changed state of the AMOC in this study may well prove to be part of a decadal oscillation superposed on a multidecadal cycle. Overlaying these oscillations is the impact of anthropogenic change that is predicted to weaken the AMOC over the next century. The continuation of measurements from the RAPID 26°N array and similar observations elsewhere in the Atlantic (Lozier et al., 2017; Meinen et al., 2013) will enable us to unravel and reveal the role of ocean circulation in the changing Atlantic climate in the coming decades.

 

Regarding the more recent attempt to link CO2 with jet stream meanderings, we have this paper providing a more reasonable assessment.  Arctic amplification: does it impact the polar jet stream?  by Valentin P. Meleshko et al.  Excerpts below in italics with my bolds.

Analysis of observation and model simulations has revealed that northward temperature gradient decreases and jet flow weakens in the polar troposphere due to global climate warming. These interdependent phenomena are regarded as robust features of the climate system. An increase of planetary wave oscillation that is attributed to Arctic amplification (Francis and Vavrus, 2012; Francis and Vavrus, 2015) has not been confirmed from analysis of observation (Barnes, 2013; Screen and Simmonds, 2013) or in our analysis of model simulations of projected climate. However, we found that GPH variability associated with planetary wave oscillation increases in the background of weakening of zonal flow during the sea-ice-free summer. Enhancement of northward heat transport in the troposphere was shown to be the main factor responsible for decrease of northward temperature gradient and weakening of the jet stream in autumn and winter. Arctic amplification provides only minor contribution to the evolution of zonal flow and planetary wave oscillation.

It has been shown that northward heat transport is the major factor in decreasing the northward temperature gradient in the polar atmosphere and increasing the planetary-scale wave oscillation in the troposphere of the mid-latitudes. Arctic amplification does not show any essential impact on planetary-scale oscillation in the mid and upper troposphere, although it does cause a decrease of northward heat transport in the lower troposphere. These results confound the interpretation of the short observational record that has suggested a causal link between recent Arctic melting and extreme weather in the mid-latitudes.

There are two additional explanations of factors causing the wavy jet stream, AKA Polar Vortex.  Dr Judah Cohen of AER has written extensively on the link between Autumn Siberian snow cover and the Arctic oscillation.  See Snowing and Freezing in the Arctic  for a more complete description of the mechanism.

Finally, a discussion with Piers Corbyn regarding the solar flux effect upon the jet stream at Is This Cold the New Normal?

Video transcript available at linked post.

Wrap Up 2019 Hurricane Season


Figure: Global Hurricane Frequency (all & major) — 12-month running sums. The top time series is the number of global tropical cyclones that reached at least hurricane-force (maximum lifetime wind speed exceeds 64-knots). The bottom time series is the number of global tropical cyclones that reached major hurricane strength (96-knots+). Adapted from Maue (2011) GRL.

This post reviews statistics for this year’s Atlantic and global hurricane seasons, now likely complete.  The chart above was updated by Ryan Maue yesterday.  A detailed report is provided by the Colorado State University Tropical Meteorology Project, directed by Dr. William Gray until his death in 2016.  More from Bill Gray in a reprinted post at the end.

The article is Summary of 2019 Atlantic Tropical Cyclone Activity and Verification of Authors’ Seasonal and Two-Week Forecasts, by Philip J. Klotzbach, Michael M. Bell, and Jhordanne Jones, in memory of William M. Gray.  Excerpts in italics with my bolds.

Summary: 

The 2019 Atlantic hurricane season was slightly above average and had a little more activity than what was predicted by our June-August updates. The climatological peak months of the hurricane season were characterized by a below-average August, a very active September, and above-average named storm activity but below-average hurricane activity in October. Hurricane Dorian was the most impactful hurricane of 2019, devastating the northwestern Bahamas before bringing significant impacts to the southeastern United States and the Atlantic Provinces of Canada. Tropical Storm Imelda also brought significant flooding to southeast Texas.

The 2019 hurricane season overall was slightly above average. The season was characterized by an above-average number of named storms and a near-average number of hurricanes and major hurricanes. Our initial seasonal forecast issued in April somewhat underestimated activity, while seasonal updates issued in June, July and August, respectively, slightly underestimated overall activity. The primary reason for the underestimate was a more rapid abatement of weak El Niño conditions than was originally anticipated. August was a relatively quiet month for Atlantic TC activity, while September was well above-average. While October had an above-average number of named storm formations, overall Accumulated Cyclone Energy was slightly below normal.

Figure: Last 4-decades of Global and Northern Hemisphere Accumulated Cyclone Energy: 24 month running sums. Note that the year indicated represents the value of ACE through the previous 24-months for the Northern Hemisphere (bottom line/gray boxes) and the entire global (top line/blue boxes). The area in between represents the Southern Hemisphere total ACE.

Previous Post:  Bill Gray: H20 is Climate Control Knob, not CO2

William Mason Gray (1929-2016), pioneering hurricane scientist and forecaster and professor of atmospheric science at Colorado State University.

Dr. William Gray made a compelling case for H2O as the climate thermostat, prior to his death in 2016.  Thanks to GWPF for publishing posthumously Bill Gray’s understanding of global warming/climate change.  The paper was compiled at his request, completed, and is now available as Flaws in applying greenhouse warming to Climate Variability. This post provides some excerpts in italics with my bolds and some headers.  Readers will learn much from the entire document (title above is link to pdf).

The Fundamental Correction

The critical argument that is made by many in the global climate modeling (GCM) community is that an increase in CO2 warming leads to an increase in atmospheric water vapor, resulting in more warming from the absorption of outgoing infrared radiation (IR) by the water vapor. Water vapor is the most potent greenhouse gas present in the atmosphere in large quantities. Its variability (i.e. global cloudiness) is not handled adequately in GCMs in my view. In contrast to the positive feedback between CO2 and water vapor predicted by the GCMs, it is my hypothesis that there is a negative feedback between CO2 warming and water vapor: CO2 warming ultimately results in less water vapor (not more) in the upper troposphere. The GCMs therefore predict unrealistic warming of global temperature. I hypothesize that the Earth’s energy balance is regulated by precipitation (primarily via deep cumulonimbus (Cb) convection) and that this precipitation counteracts warming due to CO2.

Figure 14: Global surface temperature change since 1880. The dotted blue and dotted red lines illustrate how much error one would have made by extrapolating a multi-decadal cooling or warming trend beyond a typical 25-35 year period. Note the recent 1975-2000 warming trend has not continued, and the global temperature remained relatively constant until 2014.

Projected Climate Changes from Rising CO2 Not Observed

Continuous measurements of atmospheric CO2, which were first made at Mauna Loa, Hawaii in 1958, show that atmospheric concentrations of CO2 have risen since that time. The warming influence of CO2 increases with the natural logarithm (ln) of the atmosphere’s CO2 concentration. With CO2 concentrations now exceeding 400 parts per million by volume (ppm), the Earth’s atmosphere is slightly more than halfway to containing double the 280 ppm CO2 amounts in 1860 (at the beginning of the Industrial Revolution).∗
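
The “slightly more than halfway” claim is logarithmic arithmetic, and easy to verify with the text’s own concentrations; a minimal Python check:

```python
from math import log

# "Slightly more than halfway" to a doubling, on the logarithmic scale
# on which CO2's warming influence operates.
c0, c_now = 280.0, 400.0    # ppm: 1860 level and today's level (from the text)
fraction = log(c_now / c0) / log(2.0)
print(f"fraction of one doubling: {fraction:.2f}")   # -> 0.51
```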

We have not observed the global climate change we would have expected to take place, given this increase in CO2. Assuming that there has been at least an average of 1 W/m2 CO2 blockage of IR energy to space over the last 50 years, and that this energy imbalance has been allowed to independently accumulate and cause climate change over this period with no compensating response, it would have had the potential to bring about changes in any one of the following global conditions (a back-of-envelope check of the arithmetic follows the list):

  • Warm the atmosphere by 180°C if all CO2 energy gain was utilized for this purpose – actual warming over this period has been about 0.5°C, or many hundreds of times less.
  • Warm the top 100 meters of the globe’s oceans by over 5°C – actual warming over this period has been about 0.5°C, or 10 or more times less.
  • Melt sufficient land-based snow and ice to raise the global sea level by about 6.4 m. The actual rise has been about 8–9 cm, or 60–70 times less. The gradual rise of sea level has been only slightly greater over the last ~50 years (1965–2015) than it has been over the previous two ~50-year periods of 1915–1965 and 1865–1915, when atmospheric CO2 gain was much less.
  • Increase global rainfall over the past ~50-year period by 60 cm.
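
Here is the promised back-of-envelope check of these equivalences, a Python sketch assuming standard material constants (column mass, heat capacities, and latent heat are assumed values, not from the text):

```python
# Back-of-envelope check of the bullet-point equivalences above:
# 1 W/m^2 accumulated over 50 years (material constants are assumed values).
SEC_50YR = 50 * 365.25 * 86400       # seconds in 50 years
E = 1.0 * SEC_50YR                   # J per m^2 of Earth's surface, ~1.6e9 J/m^2

# Atmosphere: column mass ~1.0e4 kg/m^2, cp ~ 1004 J/(kg K)
print(f"atmosphere: +{E / (1.0e4 * 1004):.0f} K")            # ~157 K (text: 180 C)

# Top 100 m of ocean, covering 71% of the surface: 1e5 kg per m^2 of ocean
print(f"ocean 100 m: +{E / 0.71 / (1e5 * 4186):.1f} K")      # ~5.3 K (text: >5 C)

# Land-ice melt spread over the oceans; latent heat of fusion 3.34e5 J/kg
melt_kg = E / 3.34e5                 # kg of ice melted per m^2 of Earth
print(f"sea level:  +{melt_kg / 0.71 / 1000:.1f} m")         # ~6.7 m (text: 6.4 m)
```

The printed values land within roughly 10–15 percent of the bullet-point figures, close enough to confirm the magnitudes.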

Earth Climate System Compensates for CO2

If CO2 gain is the only influence on climate variability, large and important counterbalancing influences must have occurred over the last 50 years in order to negate most of the climate change expected from CO2’s energy addition. Similarly, this hypothesized CO2-induced energy gain of 1 W/m2 over 50 years must have stimulated a compensating response that acted to largely negate energy gains from the increase in CO2.

The continuous balancing of global average in-and-out net radiation flux is therefore much larger than the radiation flux from anthropogenic CO2. For example, 342 W/m2, the total energy budget, is almost 100 times larger than the amount of radiation blockage expected from a CO2 doubling over 150 years. If all other factors are held constant, a doubling of CO2 requires a warming of the globe of about 1°C to enhance outward IR flux by 3.7 W/m2 and thus balance the blockage of IR flux to space.
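
The ~1°C figure follows from linearizing the Stefan-Boltzmann law about the planet’s effective emission temperature; a minimal sketch, assuming T ≈ 255 K (an assumed value, not stated in the text):

```python
# Warming required to radiate an extra 3.7 W/m^2 to space, linearizing the
# Stefan-Boltzmann law F = sigma * T^4 around the effective emission temperature.
SIGMA = 5.67e-8        # W m^-2 K^-4, Stefan-Boltzmann constant
T_EMIT = 255.0         # K, assumed effective emission temperature of the Earth

dF_dT = 4 * SIGMA * T_EMIT ** 3            # ~3.8 W/m^2 per K of warming
print(f"dF/dT = {dF_dT:.2f} W/m^2 per K")
print(f"warming to offset 3.7 W/m^2: {3.7 / dF_dT:.2f} K")   # ~1 K, as in the text
```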

Figure 2: Vertical cross-section of the annual global energy budget. Determined from a combination of satellite-derived radiation measurements and reanalysis data over the period of 1984–2004.

This pure IR energy blocking by CO2 versus compensating temperature increase for radiation equilibrium is unrealistic for the long-term and slow CO2 increases that are occurring. Only half of the blockage of 3.7 W/m2 at the surface should be expected to go into a temperature increase. The other half (about 1.85 W/m2) of the blocked IR energy to space will be compensated by surface energy loss to support enhanced evaporation. This occurs in a similar way to how the Earth’s surface energy budget compensates for half its solar gain of 171 W/m2 by surface-to-air upward water vapor flux due to evaporation.

Assuming that the imposed extra CO2 doubling IR blockage of 3.7 W/m2 is taken up and balanced by the Earth’s surface in the same way as the solar absorption is taken up and balanced, we should expect a direct warming of only ~0.5°C for a doubling of CO2. The 1°C expected warming that is commonly accepted incorrectly assumes that all the absorbed IR goes to the balancing outward radiation with no energy going to evaporation.

Consensus Science Exaggerates Humidity and Temperature Effects

A major premise of the GCMs has been their application of the National Academy of Science (NAS) 1979 study[3] – often referred to as the Charney Report – which hypothesized that a doubling of atmospheric CO2 would bring about a general warming of the globe’s mean temperature of 1.5–4.5°C (or an average of ~3.0°C). These large warming values were based on the report’s assumption that the relative humidity (RH) of the atmosphere remains quasi-constant as the globe’s temperature increases. This assumption was made without any type of cumulus convective cloud model and was based solely on the Clausius–Clapeyron (CC) equation and the assumption that the RH of the air will remain constant during any future CO2-induced temperature changes. If RH remains constant as atmospheric temperature increases, then the water vapor content in the atmosphere must rise exponentially.

With constant RH, the water vapor content of the atmosphere rises by about 50% if atmospheric temperature is increased by 5°C. Upper-tropospheric water vapor increases act to raise the atmosphere’s radiation emission level to a higher and thus colder altitude. This reduces the amount of outgoing IR energy that can escape to space, since emission scales as σT⁴ and the emission temperature T is lower.
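
The constant-RH scaling can be checked against the Clausius-Clapeyron relation. A small Python sketch, assuming standard constants and near-surface temperatures (all assumed values, not from the text); it gives roughly 40% for a 5 K warming, the same order as the ~50% quoted, with the exact figure depending on the temperature range assumed:

```python
from math import exp

# Clausius-Clapeyron scaling: saturation vapor pressure es ~ exp(-L/(Rv*T)),
# so at constant relative humidity the vapor content scales with es(T).
L_V = 2.5e6       # J/kg, latent heat of vaporization (assumed)
R_V = 461.0       # J/(kg K), specific gas constant of water vapor (assumed)

T1, T2 = 288.0, 293.0   # K: a 5 K warming near the global-mean surface temperature
ratio = exp(L_V / R_V * (1.0 / T1 - 1.0 / T2))
print(f"vapor increase for +5 K at constant RH: ~{(ratio - 1) * 100:.0f}%")  # ~38%
```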

These model predictions of large upper-level tropospheric moisture increases have persisted in the current generation of GCM forecasts.§ These models significantly overestimate globally-averaged tropospheric and lower stratospheric (0–50,000 feet) temperature trends since 1979 (Figure 7).

Figure 8: Decline in upper tropospheric RH. Annually-averaged 300 mb relative humidity for the tropics (30°S–30°N). From NASA-MERRA2 reanalysis for 1980–2016. Black dotted line is linear trend.

All of these early GCM simulations were destined to give unrealistically large upper-tropospheric water vapor increases for doubling of CO2 blockage of IR energy to space, and as a result large and unrealistic upper tropospheric temperature increases were predicted. In fact, if data from NASA-MERRA2[4] and NCEP/NCAR[5] can be believed, upper tropospheric RH has actually been declining since 1980 as shown in Figure 8. The top part of Table 1 shows temperature and humidity differences between very wet and dry years in the tropics since 1948; in the wettest years, precipitation was 3.9% higher than in the driest ones. Clearly, when it rains more in the tropics, relative and specific humidity decrease. A similar decrease is seen when differencing 1995–2004 from 1985–1994, periods for which the equivalent precipitation difference is 2%. Such a decrease in RH would lead to a decrease in the height of the radiation emission level and an increase in IR to space.

The Earth’s natural thermostat – evaporation and precipitation

What has prevented this extra CO2-induced energy input of the last 50 years from being realized in more climate warming than has actually occurred? Why was there recently a pause in global warming, lasting for about 15 years?  The compensating influence that prevents the predicted CO2-induced warming is enhanced global surface evaporation and increased precipitation.

Annual average global evaporational cooling is about 80 W/m2, or about 2.8 mm of evaporation per day.  A little more than 1% extra global average evaporation per year would amount to 1.3 cm per year, or 65 cm of extra evaporation integrated over the last 50 years. This is the only way that such a CO2-induced 1 W/m2 IR energy gain sustained over 50 years could occur without a significant alteration of globally-averaged surface temperature. This hypothesized increase in global surface evaporation as a response to CO2-forced energy gain should not be considered unusual. All geophysical systems attempt to adapt to imposed energy forcings by developing responses that counter the imposed action. In analysing the Earth’s radiation budget, it is incorrect to simply add or subtract energy sources or sinks to the global system and expect the resulting global temperatures to proportionally change. This is because the majority of CO2-induced energy gains will not go into warming the atmosphere. Various amounts of CO2-forced energy will go into ocean surface storage or into ocean energy gain for increased surface evaporation. Therefore a significant part of the CO2-induced energy buildup (~75%) will bring about the phase change of surface liquid water to atmospheric water vapour. The energy for this phase change must come from the surface water, with an expenditure of around 580 calories of energy for every gram of liquid that is converted into vapour. The surface water must thus undergo a cooling to accomplish this phase change.
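
For readers checking the numbers, here is a minimal Python sketch of this arithmetic, assuming standard values for the latent heat of vaporization and the density of water (constants are assumptions, not from the text):

```python
# Evaporation rate implied by 80 W/m^2 of latent-heat loss, and the 50-year
# total from the text's "little more than 1%" enhancement (constants assumed).
L_V = 2.5e6          # J/kg, latent heat of vaporization
RHO_W = 1000.0       # kg/m^3, density of liquid water

evap_m_per_s = 80.0 / (L_V * RHO_W)            # m of water evaporated per second
mm_per_day = evap_m_per_s * 86400 * 1000
print(f"evaporation: {mm_per_day:.1f} mm/day")       # ~2.8 mm/day, as in the text

extra_cm_per_yr = 0.013 * mm_per_day * 365.25 / 10   # 1.3% extra, in cm per year
print(f"extra over 50 years: {extra_cm_per_yr * 50:.0f} cm")  # ~66 cm (text: 65 cm)
```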

Therefore, increases in anthropogenic CO2 have brought about a small (about 0.8%) speeding up of the globe’s hydrologic cycle, leading to more precipitation and to relatively little global temperature increase. Greenhouse gases are indeed playing an important role in altering the globe’s climate, but they are doing so primarily by increasing the speed of the hydrologic cycle as opposed to increasing global temperature.

Figure 9: Two contrasting views of how the continuous intensification of deep cumulus convection would act to alter radiation flux to space. The top (bottom) diagram represents a net increase (decrease) in radiation to space.

Tropical Clouds Energy Control Mechanism

It is my hypothesis that the increase in global precipitation primarily arises from an increase in deep tropical cumulonimbus (Cb) convection. The typical enhancement of rainfall and updraft motion in these areas together act to increase the return flow mass subsidence in the surrounding broader clear and partly cloudy regions. The upper diagram in Figure 9 illustrates the increasing extra mass flow return subsidence associated with increasing depth and intensity of cumulus convection. Rainfall increases typically cause an overall reduction of specific humidity (q) and relative humidity (RH) in the upper tropospheric levels of the broader scale surrounding convective subsidence regions. This leads to a net enhancement of radiation flux to space due to a lowering of the upper-level emission level. This viewpoint contrasts with the position in GCMs, which suggest that an increase in deep convection will increase upper-level water vapour.

Figure 10: Conceptual model of typical variations of IR, albedo and net (IR + albedo) associated with three different areas of rain and cloud for periods of increased precipitation.

The albedo enhancement over the cloud–rain areas tends to increase the net (IR + albedo) radiation energy to space more than the weak suppression of (IR + albedo) in the clear areas. Near-neutral conditions prevail in the partly cloudy areas. The bottom diagram of Figure 9 illustrates how, in GCMs, Cb convection erroneously increases upper tropospheric moisture. Based on reanalysis data (Table 1, Figure 8) this is not observed in the real atmosphere.

Ocean Overturning Circulation Drives Warming Last Century

A slowing down of the global ocean’s MOC is the likely cause of most of the global warming that has been observed since the latter part of the 19th century.[15] I hypothesize that shorter multi-decadal changes in the MOC[16] are responsible for the more recent global warming periods between 1910–1940 and 1975–1998 and the global warming hiatus periods between 1945–1975 and 2000–2013.

Figure 12: The effect of strong and weak Atlantic THC. Idealized portrayal of the primary Atlantic Ocean upper-ocean currents during strong and weak phases of the thermohaline circulation (THC).

Figure 13 shows the circulation features that typically accompany periods when the MOC is stronger than normal and when it is weaker than normal. In general, a strong MOC is associated with a warmer-than-normal North Atlantic, increased Atlantic hurricane activity, increased blocking action in both the North Atlantic and North Pacific and weaker westerlies in the mid-latitude Southern Hemisphere. There is more upwelling of cold water in the South Pacific and Indian Oceans, and an increase in global rainfall of a few percent occurs. This causes the global surface temperatures to cool. The opposite occurs when the MOC is weaker than normal.

The average strength of the MOC over the last 150 years has likely been below the multimillennium average, and that is the primary reason we have seen this long-term global warming since the late 19th century. The globe appears to be rebounding from the conditions of the Little Ice Age to conditions that were typical of the earlier ‘Medieval’ and ‘Roman’ warm periods.

Summary and Conclusions

Liquid water covers 71% of the Earth’s surface. Over the ocean surface, sub-saturated winds blow, forcing continuous surface evaporation. Observations and energy budget analyses indicate that the surface of the globe is losing about 80 W/m2 of energy from the global surface evaporation process. This evaporation energy loss is needed as part of the process of balancing the surface’s absorption of large amounts of incoming solar energy. Variations in the strength of the globe’s hydrologic cycle are the way that the global climate is regulated. The stronger the hydrologic cycle, the more surface evaporation cooling occurs, and the greater the globe’s IR flux to space. The globe’s surface cools when the hydrologic cycle is stronger than average and warms when the hydrologic cycle is weaker than normal. The strength of the hydrologic cycle is thus the primary regulator of the globe’s surface temperature. Variations in global precipitation are linked to long-term changes in the MOC (or THC).

I have proposed that any additional warming from an increase in CO2 added to the atmosphere is offset by an increase in surface evaporation and increased precipitation (an increase in the water cycle). My prediction seems to be supported by evidence of upper tropospheric drying since 1979 and the increase in global precipitation seen in reanalysis data. I have shown that the additional heating that may be caused by an increase in CO2 results in a drying, not a moistening, of the upper troposphere, resulting in an increase of outgoing radiation to space, not a decrease as proposed by the most recent application of the greenhouse theory.

Deficiencies in the ability of GCMs to adequately represent variations in global cloudiness, the water cycle, the carbon cycle, long-term changes in deep-ocean circulation, and other important mechanisms that control the climate reduce our confidence in the ability of these models to adequately forecast future global temperatures. It seems that the models do not correctly handle what happens to the added energy from CO2 IR blocking.

Figure 13: Effect of changes in MOC: top, strong MOC; bottom, weak MOC. SLP: sea level pressure; SST: sea surface temperature.

Solar variations, sunspots, volcanic eruptions and cosmic ray changes are energy-wise too small to play a significant role in the large energy changes that occur during important multi-decadal and multi-century temperature changes. It is the Earth’s internal fluctuations that are the most important cause of climate and temperature change. These internal fluctuations are driven primarily by deep multi-decadal and multi-century ocean circulation changes, of which naturally varying upper-ocean salinity content is hypothesized to be the primary driving mechanism. Salinity controls ocean density at cold temperatures and at high latitudes, where the potential deep-water formation sites of the THC and SAS are located. North Atlantic upper-ocean salinity changes arise from variability on both multi-decadal and multi-century time scales.

Footnote:

The main point from Bill Gray was nicely summarized in a previous post Earth Climate Layers

The most fundamental of the many fatal mathematical flaws in the IPCC related modelling of atmospheric energy dynamics is to start with the impact of CO2 and assume water vapour as a dependent ‘forcing’.  This has the tail trying to wag the dog. The impact of CO2 should be treated as a perturbation of the water cycle. When this is done, its effect is negligible. — Dr. Dai Davies
