Hydroxychloroquine: A Morality Tale

Norman Doidge writes in Tablet: A startling investigation into how a cheap, well-known drug became a political football in the midst of a pandemic.  Excerpts in italics with my bolds.

We live in a culture that has uncritically accepted that every domain of life is political, and that even things we think are not political are so, that all human enterprises are merely power struggles, that even the idea of “truth” is a fantasy, and really a matter of imposing one’s view on others. For a while, some held out hope that science remained an exception to this. That scientists would not bring their personal political biases into their science, and they would not be mobbed if what they said was unwelcome to one faction or another. But the sordid 2020 drama of hydroxychloroquine—which saw scientists routinely attacked for critically evaluating evidence and coming to politically inconvenient conclusions—has, for many, killed those hopes.

Phase 1 of the pandemic saw the near collapse of the credible authority of much of our public health officialdom at the highest levels, led by the exposure of the corruption of the World Health Organization. The crisis was deepened by the numerous reversals on recommendations, which led to the growing belief that too many officials were interpreting, bending, or speaking about the science relevant to the pandemic in a politicized way. Phase 2 is equally dangerous, for it shows that politicization has started to penetrate the peer review process, and how studies are reported in scientific journals, and of course in the press.

What is unique about the hydroxychloroquine discussion is that it is a story of “unwishful thinking”—to coin a term for the perverse hope that some good outcome that most sane people would earnestly desire will never come to pass. It’s about how, in the midst of a pandemic, thousands started earnestly hoping—before the science was really in—that a drug, one that might save lives at a comparatively low cost, would not actually do so. Reasonably good studies were depicted as sloppy work, fatally flawed. Many have excelled in making counterfeit bills that look real, but few have excelled at making real bills look counterfeit. As such, as we sort this out, we shall observe not only some “tricks” about how to make bad studies look like good ones, but also how to make good studies look like bad ones. And why should anyone facing a pandemic wish to discredit potentially lifesaving medications? Well, in fact, this ability can come in very handy in the midst of a plague, when many medications and vaccines are competing to Save the World—and for the billions of dollars that will go along with that.

So this story is twofold. It’s about the discussion that unfolded (and is still unfolding) around hydroxychloroquine, but if you’re here for a definitive answer to a narrow question about one specific drug (“does hydroxychloroquine work?”), you will be disappointed. Because what our tale is really concerned with is the perilous state of vulnerability of our scientific discourse, models, and institutions—which is arguably a much bigger, and more urgent problem, since there are other drugs that must be tested for safety and effectiveness (most complex illnesses like COVID-19 often require a group of medications) as well as vaccines, which would be slated to be given to billions of people. “This misbegotten episode regarding hydroxychloroquine will be studied by sociologists of medicine as a classic example of how extra-scientific factors overrode clear-cut medical evidence,” Yale professor of epidemiology Harvey A. Risch recently argued. Why not start studying it now?

Norman Doidge tells the story in some detail (see article link in red at the top):

  • the history of quinine, chloroquine, and HCQ medical effectiveness;
  • how HCQ was used against SARS-CoV-2 early on;
  • how Raoult was the one in his lab who came up with the idea of combining the two older drugs, HCQ and azithromycin, for COVID-19;
  • the criticisms of the French studies exemplifying “unwishful thinking”;
  • Trump’s interest in HCQ and the media backlash against the medicine;
  • the failure of ICU treatment protocols with ventilators and no alternatives to off-label prescribing;
  • the insistence upon Randomized Controlled Trials (RCTs) as the only valid test for HCQ;
  • the confounding factors in such studies and the problems replicating RCT results; and,
  • the publication in high-profile journals of studies structured for HCQ to fail to help infected patients.

Conclusion from Doige

Lots and lots of COVID-19 studies will come out—several hundred are in the works. People will hope more and more accumulating numbers—and more big data—will settle it. But big data, interpreted by people who have never treated any of the patients involved, can be dangerous: a kind of exalted nonsense. It’s an old lesson: Quantity is not quality.

On this, I favor the all-available-evidence approach, which understands that large studies are important, but also that the medication that might be best for the largest number of people may not be the best one for an individual patient. In fact, it would be typical of medicine that a number of different medications will be needed for COVID-19, and that some will interact with a patient’s existing medications or conditions, so that the more medications we have to choose from, the better. We should be giving individual clinicians on the front lines the usual latitude to take account of their individual patient’s condition and preferences, and encourage these physicians to bring to bear everything they have learned and read (they have been trained to read studies), and continue to read, but also what they have seen with their own eyes. Unlike medical bureaucrats or others who issue decrees from remote places, physicians are literally on our front lines—actually observing the patients in question, and bound by a Hippocratic Oath to serve them—not the Lancet or WHO or CNN.

As contentious as this debate has been, and as urgent as the need for informed and timely information seems now, the reason to understand what happened with HCQ is for what it reflects about the social context within which science is now produced:

  • a landscape overly influenced by technology and its obsession with big data abstraction over concrete, tangible human experience;
  • academics who increasingly see all human activities as “political” power games, and so in good conscience can now justify inserting their own politics into academic pursuits and reporting;
  • extraordinarily powerful pharmaceutical companies competing for hundreds of billions of dollars;
  • politicians competing for pharmaceutical dollars as well as public adoration—both of which come these days too much from social media; and,
  • the decaying of the journalistic and scholarly super-layers that used to do a much better job of holding everyone in this pyramid accountable, but no longer do, or even can.

If you think this year’s controversy is bad, consider that hydroxychloroquine is given to relatively few people with COVID-19, all sick, many with nothing to lose. It enters the body, and leaves fairly quickly, and has been known to us for decades. COVID vaccines, which advocates will want to be mandatory and given to all people—healthy and not, young and old—are being rushed past their normal safety precautions and regulations, and the typical five-to-10-year observation period is being waived to get “Operation Warp Speed” done as soon as possible.

This is being done with the endorsement of public health officials—the same ones, in many cases, who are saying HCQ is suddenly extremely dangerous.

Philosophically, and psychologically, it is a fantastic spectacle to behold, a reversal, the magnitude and the chutzpah of which must inspire awe: a public health establishment, showing extraordinary risk aversion to medications and treatments that are extremely well known, and had been used by billions, suddenly throwing caution to the wind and endorsing the rollout of treatments that are entirely novel—and about which we literally can’t possibly know anything, as regards their long-term effects. Their manufacturers know this well themselves, which is why they have aimed for, insisted on, and already been granted indemnification—a guarantee, by those same public health officials and governments, that they will not be held legally accountable should their product cause injury.

From unheard of extremes of caution and “unwishful thinking,” to unheard of extremes of risk-taking, and recklessly wishful thinking, this double standard, this about-face, is not happening because this issue of public safety is really so complex a problem that only our experts can understand it; it is happening because there is, right now, a much bigger problem: with our experts, and with the institutions that we had trusted to help solve our most pressing scientific and medical problems.

Unless these are attended to, HCQ won’t be remembered simply as that major medical issue that no one could agree on, and which left overwhelming controversy, confusion, and possibly the unnecessary deaths of tens of thousands in its wake; it will be one of many in a chain of such disasters.

Norman Doidge, a contributing writer for Tablet, is a psychiatrist, psychoanalyst, and author of The Brain That Changes Itself and The Brain’s Way of Healing.

 

Kneeling to Experts Not Advisable

Taking an opinion “under advisement” means seriously considering it but retaining the independence to weigh it against other considerations.  Charles Lipson explains the importance of not bowing to expert recommendations in his article Reopening Schools and the Limits of Expertise.  Excerpts in italics with my bolds.

The last thing you want to hear from your brain surgeon (aside from “Oops”) is “Wow, I’ve always wanted to do one of these.” You’ll feel a lot better hearing, “I’ve done 30 operations like this over the past month and published several articles about them.”

Expertise like that is essential for brain surgery, building rockets, constructing skyscrapers, and much, much more. Our modern world is built upon it. We need such expert advice as we decide whether to open schools this fall, and we should turn to educators, physicians, and economists to get it. But ultimately we, as citizens and the local officials we elect, should make the choices. These are not technical decisions but political ones that incorporate technical issues and projections.

We should hold our representatives, not the experts, responsible for the choices they make.

When we listen to experts, we should remember Clint Eastwood’s comment in “Magnum Force”: “A man’s got to know his limitations.” Even the best authorities have them, and one, ironically, is that they seldom admit them, even to themselves. It is important for us both to appreciate expert advice and to recognize its limits every time we’re told to “be quiet and do what they say.” We should listen, think it over, and then make our own decisions as citizens, parents, teachers, business owners, workers, retirees — and voters.

The best way to understand why we need experts but also why we need to weigh their advice, not swallow it whole and uncooked, is to consider this illustration: Should we build a hydroelectric dam in a beautiful valley? If we construct it, we certainly need the best engineers and construction workers. We need engineering firms to project the cost and economists to project the price of its energy and potable water. Their expertise is essential.

But they cannot tell us whether it is wise to destroy California’s Hetch Hetchy Valley to build that dam. The world’s top experts on wildlife conservation and regional economic growth cannot give us the definitive answer, either. They would give us, at best, different answers, reflecting their different expertise. The conservationist would tell us it is a terrible idea to destroy such beautiful, irreplaceable habitat and kill endangered species. The economist would tell us we need the energy and fresh water if Northern California is to grow. What no economist could have predicted, decades ago, is that the entire world’s income would vastly increase because of technological advances from Silicon Valley, which had the resources needed to grow.

The hydroelectric example illustrates a more general point: complex questions involve experts in multiple fields, but there is no supra-expert to aggregate their differing advice. Even if we assume all experts within a field give similar advice, who can aggregate it across fields? No one. There is no “expert of experts.” In the example of the hydroelectric dam, the policy decision depends on how much we weigh conservation versus growth and how well we can predict future options and alternatives, such as the price of solar power or prospective growth from Palo Alto to San Jose.

Sorting out the answers is ultimately a question for voters and their representatives, not for experts in hydroelectric engineering, wildlife conservation, or regional economics. We need the best advice, but only we, as citizens, can weigh it and make a final decision. In a representative democracy, we elect officials to make those decisions. If democracy is to work, we must hold them accountable. One criticism of the growing regulatory state is that it is impossible to hold the decision makers accountable. Some of that criticism should be directed at legislators, who avoid responsibility by writing vague laws and then off-loading hard decisions onto bureaucrats and judges.

We should be especially skeptical when experts predict distant outcomes.

Their record is none too impressive. We should be skeptical, too, when laws and regulations set one definitive criterion, such as preserving the endangered snail darter, at the expense of all other considerations. That might be the best decision, or it might not, but it is ultimately a political choice. Right now, federal judges have awarded themselves extensive — and unilateral — power to make it.

These problems, which combine technical expertise and political judgment, are essential to understanding our dilemmas about reopening K-12 schools during the COVID-19 pandemic. Epidemiologists are saying, “Resuming in-person instruction too soon could spread the disease. Although children are at low risk, they will bring it home to parents and grandparents.” Pediatricians, by contrast, say it is important for children’s overall health to get them back in school. Online learning is not very effective, they say, and losing a year’s classroom instruction and socialization will be extremely harmful. Economists focus on different issues, such as parents who cannot return to full-time employment because they must care for children at home. That constraint is especially harmful to one-parent households and low-income, hourly workers, whose children also have less access to computers and fast internet connections. Notice that these experts are not the self-interested voices of interest groups such as teachers’ unions or small businesses. They are specialists in economics, education, and public health. Each has its own “silo of expertise.” Each silo produces a different answer because its experts focus on their own subset of issues and weigh them most heavily.

As we listen to these experts, we need to remember that even the best, most disinterested advice has its limitations. Reopening schools, like other big policy questions, involves multiple silos and hundreds of moving parts. It is impossible to predict what all those parts will do, how much weight to give each one, or what effects they might have, now and in the distant future. It was only from trial-and-error that we learned how inadequate online instruction really is. We entered this massive national experiment with some optimism and trudge forward with pessimism.

We should be humble about what we still don’t know.

Our success in reopening schools and businesses depends on things we cannot know with certainty. How quickly will our biotechnology companies discover effective therapeutics and vaccines? How quickly will the American population develop “herd immunity”? How soon will customers return, en masse, to shopping malls, indoor dining, and cross-country travel?

Predicting the secondary and tertiary effects of policy choices is especially hard.

Keeping businesses closed, for instance, sharply reduces local tax revenues, which probably means reducing essential services such as garbage collection and local policing. Those cuts harm public health and safety. But how much? No expert is smart enough to predict all these knock-on effects, much less aggregate them and give an overall conclusion. As it happens, experts are no better at predicting these effects than well-informed laymen. The main difference, according to studies, is that experts are more confident in their (often-wrong) predictions.

The point here is not that experts are irrelevant. We need them, and we need to pay attention to their data, logic, and conclusions. But we also need to remember that

  • Even the best current knowledge has its limits, and
  • There are no “supra-experts” to weigh the best advice from different fields and aggregate them to reach the “definitive” answer.

Sorting out this expert advice is not a technological question. It is a political one. Mayors, governors, and school boards across the country understand that crucial point as they decide whether to open schools this fall for in-person instruction. The voters understand it, too. They should listen to the experts, see what other jurisdictions decide, and check out their varied results. Then, they should walk into the voting booth and hold their representatives to account.

Charles Lipson is the Peter B. Ritzma Professor of Political Science Emeritus at the University of Chicago, where he founded the Program on International Politics, Economics, and Security.

In Praise of Science Skeptics

Pandemic Panic: Play or Quit? Only a skeptic gives you a choice.

Peter St. Onge writes at Mises Wire The COVID-19 Panic Shows Us Why Science Needs Skeptics Excerpts in italics with my bolds and images.

The dumpster fire of COVID predictions has shown exactly why it’s important to sustain and nurture skeptics, lest we blunder into scientific monoculture and groupthink. And yet the explosion of “cancel culture” intolerance of any opinion that doesn’t fit a shrinking “3 x 5 card” of right-think risks destroying the very tolerance and science that sustains our civilization.

Since World War II, America has suffered two respiratory pandemics comparable to COVID-19: the 1958 “Asian flu,” then the 1969 “Hong Kong flu.” In neither case did we shut down the economy—people were simply more careful. Not all that careful, of course—Jimi Hendrix was playing at Woodstock in the middle of the 1969 pandemic, and social distancing wasn’t really a thing in the “Summer of Love.”

And yet COVID-19 was very different thanks to a single “buggy mess” of a computer prediction from one Neil Ferguson, a British epidemiologist given to hysterical overestimates of deaths, from mad cow to bird flu to H1N1.

For COVID-19, Ferguson predicted 3 million deaths in America unless we basically shut down the economy. Panicked policymakers took his prediction as gospel, dressed as it was in the cloak of science.

Now, long after governments plunged half the world into a Great Depression, those panicked predictions are being quietly revised down by an order of magnitude, now suggesting a final tally comparable to 1958 and 1969.

COVID-19 would have been a deadly pandemic with or without Ferguson’s fantasies, but had we known the true scale and parameters of the threat we might have chosen better tailored means to both safeguard the elderly and at-risk, while sustaining the wider economy. After all, economists have long known that mass unemployment and widespread bankruptcies carry enormous health consequences that are very real to the victims suffering drained life savings, ruined businesses, broken families, widespread mental and physical health deterioration, even suicide. Decisions involve tradeoffs.

COVID-19 has illustrated the importance of free and robust inquiry. After all, panicked politicians facing media accusations of “killing grandma” aren’t in a very good position to evaluate these tradeoffs, and they need intellectual ammunition. Not only to show them which path is best, but to bolster them when a left-wing media establishment attacks.

Moreover, voters need this ammunition so they can actually tell the politicians what to do. This means two things: debate that is transparent, and debate that is tolerant of skeptics.

Transparency means data and computer code open to public scrutiny as the minimum requirement for any study that is used to justify policy, from lockdowns to carbon taxes to whatever comes next. These studies must be based on verifiable facts, code that does what it says it does, and the ensuing decision-making process must be transparent and open to the public.

One former Indian bureaucrat put it well: “Emergency situations like this pandemic should require a far higher—and not lower—level of scrutiny,” since policy choices have such tremendous impact. “This suggests a need for democracies to strengthen their critical thinking capacity by creating an independent ‘Black Hat’ institution whose purpose would be to question any technical foundations of government decisions.”

Even more important than transparency, debate must be tolerant of alternative opinions. This means ideas that are wrong, offensive, even dangerous, have to be tolerated, even celebrated. By all means, refute them—most alternative hypotheses are completely wrong, so it shouldn’t be hard to simply refute them without censorship. This, after all, is the essence of science—to generate hypotheses testable by anybody, not just licensed “experts.”

Whether we are faced with a new crisis, a new policy innovation, or simply designing a better mousetrap, groupthink and censorship are recipes for disaster and stagnation, while transparency and tolerance of new ideas are the very essence of progress. Indeed, it is largely this scientific tolerance that allowed us to rise up from the long, brutal darkness of poverty.

As Francis Bacon observed four hundred years ago, innovation and new knowledge do not come from prestigious “learned” insiders; rather, progress comes from the questioner, the tinkerer, the skeptic.

Indeed, every major scientific advance challenged the “settled science” of its day, and was often denounced as pernicious and false, even dangerous. The modern blood transfusion, for example, was developed in the late 1600s, then banned for nearly a century by a hostile medical establishment, “canceling” tens of millions of lives at the altar of groupthink and hostility to skeptics.

It’s comforting to know that our problems are old ones, and also encouraging that our solution is both time-tested and simple: transparency and tolerance. After all, the very reason our culture elevates science is because it is built on a millennia-long evolutionary “battle of ideas” in which theories are constantly tested and retested in a delightfully endless search for ever better understanding.

This implies there is no such thing as “settled science”—the phrase itself is contrary to the scientific method. In reality, science is not some billion-dollar gleaming palace in Bethesda, rather it’s a gnarled mutant sewer rat that takes all comers because it’s been burned, cut, run over, crushed, run through the wood chipper, and survived. That ugly beast is our salvation, not the gleaming palace where we bow down to whichever random guy has the biggest degree in the room.

Only with free inquiry for the most unpopular, offensive, dangerous, and, yes, wrong ideas imaginable does that power sustain. And if we break that, we can expect a series of rapid catastrophes that, like failed golden ages of the past, return us to the nasty, brutish, and very short lives that have been humanity’s norm.

Whether pandemic, climate change, “institutional racism,” or whatever new crisis they conjure next, we have a fundamental right to tenaciously defend the transparency and tolerance that constitutes science itself so that it remains among humanity’s crowning achievements, and so that we preserve this golden age that would astound our ancestors.

Update: Stories vs. Facts

This post revisits a previous discussion of how public discourse is increasingly governed by stories at the expense of facts.  The recent street violence provides another example.  NYT columnist Bari Weiss provides an insider’s look at how the media produces stories instead of reports.

Bari Weiss Twitter Thread

The civil war inside The New York Times between the (mostly young) wokes and the (mostly 40+) liberals is the same one raging inside other publications and companies across the country. The dynamic is always the same. (Thread.)

The Old Guard lives by a set of principles we can broadly call civil libertarianism. They assumed they shared that worldview with the young people they hired who called themselves liberals and progressives. But it was an incorrect assumption.

The New Guard has a different worldview, one articulated best by @JonHaidt and @glukianoff. They call it “safetyism,” in which the right of people to feel emotionally and psychologically safe trumps what were previously considered core liberal values, like free speech.

Perhaps the cleanest example of this dynamic was in 2018, when David Remnick, under tremendous public pressure from his staffers, disinvited Steve Bannon from appearing on stage at the New Yorker Ideas Festival. But there are dozens and dozens of examples.

I’ve been mocked by many people over the past few years for writing about the campus culture wars. They told me it was a sideshow. But this was always why it mattered: The people who graduated from those campuses would rise to power inside key institutions and transform them.

I’m in no way surprised by what has now exploded into public view. In a way, it’s oddly comforting: I feel less alone and less crazy trying to explain the dynamic to people. What I am shocked by is the speed. I thought it would take a few years, not a few weeks.

Here’s one way to think about what’s at stake: The New York Times motto is “all the news that’s fit to print.” One group emphasizes the word “all.” The other, the word “fit.”

W/r/t Tom Cotton’s op-ed and the choice to run it: I agree with our critics that it’s a dodge to say “we want a totally open marketplace of ideas!” There are limits. Obviously. The question is: does his view fall outside those limits? Maybe the answer is yes.

If the answer is yes, it means that the views of more than half of Americans are unacceptable. And perhaps they are. https://theweek.com/speedreads/917760/plurality-democrats-support-calling-military-aid-police-during-protests-poll-shows

“A plurality of Democrats would support calling in the U.S. military to aid police during protests,…
President Trump on Monday threatened to call in the United States military in an effort to curtail protests across the United States, and it turns out most Americans — even some of those who think the president is doing a poor job of handling the demonstrations against police brutality — would support such an action.”

Background from Previous Post

Facts vs Stories is written by Steven Novella at Neurologica. Excerpts in italics with my bolds.

There is a common style of journalism, that you are almost certainly very familiar with, in which the report starts with a personal story, then delves into the facts at hand often with reference to the framing story and others like it, and returns at the end to the original personal connection. This format is so common it’s a cliche, and often the desire to connect the actual new information to an emotional story takes over the reporting and undermines the facts.

This format reflects a more general phenomenon – that people are generally more interested in and influenced by a good narrative than by dry facts. Or are we? New research suggests that while the answer is still generally yes, there is some more nuance here (isn’t there always?). The researchers did three studies in which they compared the effects of strong vs weak facts presented either alone or embedded in a story. In the first two studies the information was about a fictitious new phone. The weak fact was that the phone could withstand a fall of 3 feet. The strong fact was that the phone could withstand a fall of 30 feet. What they found in both studies is that the weak fact was more persuasive when presented embedded in a story than alone, while the strong fact was less persuasive.

They then did a third study about a fictitious flu medicine, and asked subjects if they would give their e-mail address for further information. People are generally reluctant to give away their e-mail address unless it’s worth it, so this was a good test of how persuasive the information was. When a strong fact about the medicine was given alone, 34% of the participants were willing to provide their e-mail. When embedded in a story, only 18% provided their e-mail.  So, what is responsible for this reversal of the normal effect that stories are generally more persuasive than dry facts?

The authors suggest that stories may impair our ability to evaluate factual information.

This is not unreasonable, and is suggested by other research as well. To a much greater extent than you might think, cognition is a zero-sum game. When you allocate resources to one task, those resources are taken away from other mental tasks (this basic process is called “interference” by psychologists). Further, adding complexity to brain processing, even if this leads to more sophisticated analysis of information, tends to slow down the whole process. And also, parts of the brain can directly suppress the functioning of other parts of the brain. This inhibitory function is actually a critical part of how the brain works together.

Perhaps the most dramatic relevant example of this is a study I wrote about previously, in which fMRI scans were used to study subjects listening to a charismatic speaker who was either from the subject’s religion or not. When a charismatic speaker who matched the subject’s religion was speaking, the critical-thinking part of the brain was literally suppressed. In fact, this study also found opposite effects depending on context.

The contrast estimates reveal a significant increase of activity in response to the non-Christian speaker (compared to baseline) and a massive deactivation in response to the Christian speaker known for his healing powers. These results support recent observations that social categories can modulate the frontal executive network in opposite directions corresponding to the cognitive load they impose on the executive system.

So when listening to speech from a belief system we don’t already believe, we engaged our executive function. When listening to speech from within our existing belief system, we suppressed our executive function.

In regard to the current study, is something similar going on? Does processing the emotional content of stories impair our processing of factual information—a benefit for weak facts, but actually a detriment to the persuasive power of strong facts that are persuasive on their own?

Another potential explanation occurs to me, however (showing how difficult it can be to interpret the results of psychological research like this). It is a reasonable premise that a strong fact is more persuasive on its own than a weak fact – being able to survive a 3-foot fall is not as impressive as a 30-foot fall. But the more impressive fact may also trigger more skepticism. I may simply not believe that a phone could survive such a fall. If that fact, however, is presented in a straightforward fashion, it may seem somewhat credible. If it is presented as part of a story that is clearly meant to persuade me, then that might trigger more skepticism. In fact, doing so is inherently sketchy: the strong fact is impressive on its own, so why are you trying to persuade me with this unnecessary personal story – unless the fact is BS?

There is also research to support this hypothesis. When a documentary about a fringe topic, like UFOs, includes the claim that, “This is true,” that actually triggers more skepticism. It encourages the audience to think, “Wait a minute, is this true?” Meanwhile, including a scientist who says, “This is not true,” may actually increase belief, because the audience is impressed that the subject is being taken seriously by a scientist, regardless of their ultimate conclusion. But the extent of such backfire effects remains controversial in psychological research – it appears to be very context dependent.

I would summarize all this by saying that we can identify psychological effects that relate to belief and skepticism. However, there are many potential effects that can be triggered in different situations, and they interact in often complex and unpredictable ways. So even when we identify a real effect, such as the persuasive power of stories, it doesn’t predict what will happen in every case. In fact, the net statistical effect may disappear or even reverse in certain contexts, because it is either neutralized or overwhelmed by another effect. I think that is what is happening here.

What do you do when you are trying to be persuasive, then? The answer has to be – it depends. Who is your audience? What claims or facts are you trying to get across? What is the ultimate goal of the persuasion (public service, education, political activism, marketing)? I don’t think we can generate any solid algorithm, but we do have some guiding rules of thumb.

First, know your audience, or at least those you are trying to persuade. No message will be persuasive to everyone.

If the facts are impressive on their own, let them speak for themselves. Perhaps put them into a little context, but don’t try to wrap them up in an emotional story. That may backfire.

Depending on context, your goal may be to not just provide facts, but to persuade your audience to reject a current narrative for a better one. In this case the research suggests you should both argue against the current narrative, and provide a replacement that provides an explanatory model.

So you can’t just debunk a myth, conspiracy theory, or misconception. You need to provide the audience with another way to make sense of their world.

When possible find common ground. Start with the premises that you think most reasonable people will agree with, then build from there.

Now, it’s not my goal to outline how to convince people of things that are not true, or that are subjective but in your personal interest. That’s not what this blog is about. I am only interested in persuading people to apportion their belief to the logic and evidence. So I am not going to recommend ways to avoid triggering skepticism – I want to trigger skepticism. I just want it to be skepticism based on science and critical thinking, not emotional or partisan denial, nihilism, cynicism, or just being contrarian.

You also have to recognize that it can be difficult to persuade people. This is especially true if your message is constrained by facts and reality. Sometimes the real information is not optimized for emotional appeal, and it has to compete against messages that are so optimized (and are unconstrained by reality). But at least knowing the science of how people process information and form their beliefs is useful.

Postscript:  Hans Rosling demonstrates how to use data to tell the story of our rising civilization.

Bottom Line:  When it comes to science, the rule is to follow the facts.  When the story is contradicted by new facts, the story changes to fit the facts, not the other way around.

See also:  Data, Facts and Information

Media Turn Math Dopes into Dupes

Those who have investigated global warming/climate change discovered that the numbers don’t add up. But if you don’t do the math you won’t know that, because the truth is found in the details (the devilish contradictions to sweeping claims). Those without numerical literacy (including, apparently, most journalists) are at the mercy of the loudest advocates. Social policy then becomes a matter of going along with herd popularity. Shout out to AOC!

Now we get the additional revelation regarding pandemic math and the refusal to correct over-the-top predictions. It’s the same dynamic but accelerated by the more immediate failure of models to forecast contagious reality. Sean Trende writes at Real Clear Politics The Costly Failure to Update Sky-Is-Falling Predictions. Excerpts in italics with my bolds.

On March 6, Liz Specht, Ph.D., posted a thread on Twitter that immediately went viral. As of this writing, it has received over 100,000 likes and almost 41,000 retweets, and was republished at Stat News. It purported to “talk math” and reflected the views of “highly esteemed epidemiologists.” It insisted it was “not a hypothetical, fear-mongering, worst-case scenario,” and that, while the predictions it contained might be wrong, they would not be “orders of magnitude wrong.” It was also catastrophically incorrect.

The crux of Dr. Specht’s 35-tweet thread was that the rapid doubling of COVID-19 cases would lead to about 1 million cases by May 5, 4 million by May 11, and so forth. Under this scenario, with a 10% hospitalization rate, we would expect approximately 400,000 hospitalizations by mid-May, which would more than overwhelm the estimated 330,000 available hospital beds in the country. This would combine with a lack of protective equipment for health care workers and lead to them “dropping from the workforce for weeks at a time,” to shortages of saline drips and so forth. Half the world would be infected by the summer, and we were implicitly advised to buy dry goods and to prepare not to leave the house.
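The arithmetic behind the thread was simple compounding. Here is a minimal sketch of that kind of projection; the starting case count and doubling time below are illustrative assumptions, not the thread's exact inputs:

```python
# Naive exponential projection: case counts double every `doubling_days`.
def project_cases(initial_cases, doubling_days, days):
    return initial_cases * 2 ** (days / doubling_days)

beds = 330_000        # estimated available US hospital beds (from the article)
hosp_rate = 0.10      # the thread's assumed 10% hospitalization rate

# Illustrative inputs: a couple thousand known cases in early March,
# doubling every ~6 days, projected roughly 70 days forward to mid-May.
cases_mid_may = project_cases(2_000, 6, 70)
hospitalized = hosp_rate * cases_mid_may    # far exceeds the bed supply
```

Compounding of this kind overwhelms any fixed capacity quickly, which is why the thread's conclusions hinged entirely on the doubling time and hospitalization rate holding constant.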

Interestingly, this thread was wrong not because we managed to bend the curve and stave off the apocalypse; for starters, Dr. Specht described the cancellation of large events and workplace closures as something that would shift things by only days or weeks.

Instead, this thread was wrong because it dramatically understated our knowledge of the way the virus worked; it fell prey to the problem, common among experts, of failing to address adequately the uncertainty surrounding its point estimates. It did so in two opposing ways. First, it dramatically understated the rate of spread. If serological tests are to be remotely believed, we likely hit the apocalyptic milestone of 2 million cases quite some time ago. Not in the United States, mind you, but in New York City, where 20% of residents showed positive COVID-19 antibodies on April 23. Fourteen percent of state residents showed antibodies, suggesting 2.5 million cases in the Empire State alone; since antibodies take a while to develop, this was likely the state of affairs in mid-April or earlier.
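The serological back-calculation in the paragraph above is a one-line estimate: prevalence times population. A sketch, where the prevalence figures come from the article and the population figures are rough assumptions of mine:

```python
# Prevalence x population gives the implied number of infections.
nyc_population = 8.4e6    # approximate NYC residents (assumption)
nys_population = 19.5e6   # approximate New York State residents (assumption)

nyc_cases = 0.20 * nyc_population   # 20% antibody-positive on April 23
nys_cases = 0.14 * nys_population   # 14% antibody-positive statewide
```

These rough totals land in the same range as the article's figures of roughly 2 million cases in the city and 2.5 million statewide.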

But in addition to being wrong about the rate of spread, the thread was also very wrong about the rate of hospitalization. While New York City found its hospital system stretched, it avoided catastrophic failure, despite having within its borders the entire number of cases predicted for the country as a whole, a month earlier than predicted. Other areas of the United States found themselves with empty hospital beds and unused emergency capacity.

One would think that, given the amount of attention this was given in mainstream sources, there would be some sort of revisiting of the prediction. Of course, nothing of the sort occurred.

This thread has been absolutely memory-holed, along with countless other threads and Medium articles from February and March. We might forgive such forays on sites like Twitter and Medium, but feeding frenzies from mainstream sources are also passed over without the media ever revisiting to see how things turned out.

Consider Florida. Gov. Ron DeSantis was castigated for failing to close the beaches during spring break, and critics suggested that the state might be the next New York. I’ve written about this at length elsewhere, but Florida’s new cases peaked in early April, at which point it was a middling state in terms of infections per capita. The virus hasn’t gone away, of course, but the five-day rolling average of daily cases in Florida is roughly where it was in late March, notwithstanding the fact that testing has increased substantially. Taking increased testing into account, the positive test rate has gradually declined since late March as well, falling from a peak of 11.8% on April 1 to a low of 3.6% on May 12.

Notwithstanding this, the Washington Post continues to press stories of public health officials begging state officials to close beaches (a more interesting angle at this point might be why these health officials were so wrong), while the New York Times noted a few days ago (misleadingly, and grossly so) that “Florida had a huge spike in cases around Miami after spring break revelry,” without providing the crucial context that the caseload mimicked increases in other states that did not play host to spring break. Again, perhaps the real story is that spring breakers passed COVID-19 among themselves and seeded it when they got home. I am sure some of this occurred, but it seems exceedingly unlikely that they would have spread it widely among themselves and not also spread it widely to bartenders, wait staff, hotel staff, and the like in Florida.

Florida was also one of the first states to experiment with reopening. Duval County (Jacksonville) reopened its beaches on April 19 to much national skepticism. Yet daily cases are lower today than they were the day it reopened; there was a recent spike in cases associated with increased testing, but it is now receding.

Or consider Georgia, which one prominent national magazine claimed was engaging in “human sacrifice” by reopening. Yet, after nearly a month, a five-day average of Georgia’s daily cases looks like this:

What about Wisconsin, which was heavily criticized for holding in-person voting? It has had an increased caseload, but that is largely due to increased testing (up almost six-fold since early April) and an idiosyncratic outbreak in its meatpacking plants. The latter is tragic, but it is not related to the election; in fact, a Milwaukee Journal-Sentinel investigation failed to link any cases to the election; this has largely been ignored outside of conservative media sites such as National Review.

We could go on – after being panned for refusing to issue a stay-at-home order, South Dakota indeed suffered an outbreak (once again, in its meatpacking plants), but deaths there have consistently averaged less than three per day, to little fanfare – but the point is made. Some “feeding frenzies” have panned out, but many have failed to do so; rather than acknowledging this failure, the press typically moves on.

This is an unwelcome development, for a few reasons. First, not everyone follows this pandemic closely, and so a failure to follow up on how feeding frenzies end up means that many people likely don’t update their views as often as they should. You’d probably be forgiven if you suspected hundreds of cases and deaths followed the Wisconsin election.

Second, we obviously need to get policy right here, and to be sure, reporting bad news is important for producing informed public opinion. But reporting good news is equally as important. Third, there are dangers to forecasting with incredible certitude, especially with a virus that was detected less than six months ago. There really is a lot we still don’t know, and people should be reminded of this. Finally, among people who do remember things like this, a failure to acknowledge errors foments cynicism and further distrust of experts.

The damage done to this trust is dangerous, for at this time we desperately need quality expert opinions and news reporting that we can rely upon.

Addendum:  Tilak Doshi makes the comparison to climate crisis claims Coronavirus And Climate Change: A Tale Of Two Hysterias writing at Forbes.  Excerpts in italics with my bolds.

It did not take long after the onset of the global pandemic for people to observe the many parallels between the covid-19 pandemic and climate change. An invisible novel virus of the SARS family now represents an existential threat to humanity. As does CO2, a colourless trace gas constituting 0.04% of the atmosphere which allegedly serves as the control knob of climate change. Lockdowns are to the pandemic what decarbonization is to climate change. Indeed, lockdowns and decarbonization share much in common: both curtail everything from tourism and international travel to shopping and having a good time. It would seem that Greta Thunberg’s dreams have come true, and perhaps that is why CNN announced on Wednesday that it is featuring her on a coronavirus town-hall panel alongside health experts.

But, beyond being a soundbite and a means of obtaining political cover, ‘following the science’ is neither straightforward nor consensual. The diversity of scientific views on covid-19 became quickly apparent in the dramatic flip-flop of the UK government. In the early stages of the spread of infection, Boris Johnson spoke of “herd immunity”, protecting the vulnerable and common sense (à la Sweden’s leading epidemiologist Professor Johan Giesecke) and rejected banning mass gatherings or imposing social distancing rules. Then, an unpublished bombshell March 16th report by Professor Neil Ferguson of Imperial College, London, warned of 510,000 deaths if the country did not immediately adopt a suppression strategy. On March 23, the UK government reversed course and imposed one of Europe’s strictest lockdowns. For the US, the professor had predicted 2.2 million deaths absent similar government controls, and here too, Ferguson’s alarmism moved the federal government into lockdown mode.

Unlike climate change models that predict outcomes over a period of decades, however, it takes only days and weeks for epidemiological model forecasts to be falsified by data. Thus, by March 25th, Ferguson’s prediction of half a million fatalities in the UK had been adjusted downward to “unlikely to exceed 20,000”, a reduction by a factor of 25. This drastic reduction was credited to the UK’s lockdown, which, however, had been imposed only two days previously – before any social distancing measures could possibly have had enough time to work.

For those engaged in the fraught debates over climate change over the past few decades, the use of alarmist models to guide policy has been a familiar point of contention. Much as Ferguson’s model drove governments to impose Covid-19 lockdowns affecting nearly 3 billion people on the planet, Professor Michael Mann’s “hockey stick” model was used by the IPCC, mass media and politicians to push the man-made global warming (now called climate change) hysteria over the past two decades.

As politicians abdicate policy formulation to opaque expertise in highly specialized fields such as epidemiology or climate science, a process of groupthink emerges as scientists generate ‘significant’ results which reinforce confirmation bias, affirm the “scientific consensus” and marginalize sceptics.

Rather than allocating resources and efforts towards protecting the vulnerable old and infirm while allowing the rest of the population to carry on with their livelihoods with individuals taking responsibility for safe socializing, most governments have opted to experiment with top-down economy-crushing lockdowns. And rather than mitigating real environmental threats such as the use of traditional biomass for cooking indoors that is a major cause of mortality in the developing world or the trade in wild animals, the climate change establishment advocates decarbonisation (read de-industrialization) to save us from extreme scenarios of global warming.

Taking the wheels off of entire economies on the basis of wildly exaggerated models is not the way to go.

Footnote: Mark Hemingway sees how commonplace is the problem of uncorrected media falsity in his article When Did the Media Stop Running Corrections? Excerpts in italics with my bolds.

Vanity Fair quickly recast Sherman’s story without acknowledging its error: “This post has been updated to include a denial from Blackstone, and to reflect comments received after publication by Charles P. Herring, president of Herring Networks, OANN’s parent company.” In sum, Sherman based his piece on a premise that was wrong, and Vanity Fair merely acted as if all the story needed was a minor update.

Such post-publication “stealth editing” has become the norm. Last month, The New York Times published a story on the allegation that Joe Biden sexually assaulted a former Senate aide. After publication, the Times deleted the second half of this sentence: “The Times found no pattern of sexual misconduct by Mr. Biden, beyond the hugs, kisses and touching that women previously said made them uncomfortable.”

In an interview with Times media columnist Ben Smith, Times’ Executive Editor Dean Baquet admitted the sentence was altered at the request of Biden’s presidential campaign. However, if you go to the Times’ original story on the Biden allegations, there’s no note saying how the story was specifically altered or why.

It’s also impossible not to note how this failure to issue proper corrections and penchant for stealth editing goes hand-in-hand with the media’s ideological preferences.

In the end the media’s refusal to run corrections is a damnable practice for reasons that have nothing to do with Christianity. In an era when large majorities of the public routinely tell pollsters they don’t trust the media, you don’t have to be a Bible-thumper to see that admitting your mistakes promptly, being transparent about trying to correct them, and when appropriate, apologizing and asking for forgiveness – are good secular, professional ethics.


On Following the Science

H/T to Luboš Motl for posting at his blog Deborah Cohen, BBC, and models vs theories.  Excerpts in italics with my bolds.

Dr Deborah Cohen is an award-winning health journalist who holds a doctorate – one that actually seems to be related to the medical sciences – and who is now working for BBC Newsnight. I think that the 13-minute-long segment above is an excellent piece of journalism.

It seems to me that she primarily sees that the “models” predicting half a million dead Britons have spectacularly failed, and that is something an honest health journalist simply must be interested in. And she seems to be an accomplished and award-winning journalist. Second, she seems to see through some of the “more internal” defects of bad medical (and not only medical) science. Her PhD almost certainly helps in that. Someone whose background is purely in the humanities or in PR-or-communication gibberish simply shouldn’t be expected to be on par with a real PhD.

So she has talked to the folks at the “Oxford evidence-based medicine” institute and others who understand the defects of “computer models” as a basis for science or policymaking. Unsurprisingly, she is more or less led to the conclusion that the lockdown (in the U.K.) was a mistake.

If your equation – or computer model – assumes that 5% of those who contract the virus die (i.e. the probability is 5% that they die within a week of getting the virus), then your predicted fatality count may be inflated by a factor of 25 if the actual case fatality rate is 0.2% – and it is something comparable to that. It should be common sense that if someone makes a factor-of-25 error in the choice of this parameter, his predictions may be wrong by a factor of 25, too. It doesn’t matter if the computer program looks like SimCity, with 66.666 million Britons represented in the giant RAM of a supercomputer. This brute force obviously cannot compensate for a fundamental ignorance or error in your choice of the fatality rate.
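The linear dependence described above can be sketched in a few lines of Python; all numbers here are hypothetical, chosen only to mirror the factor-of-25 example:

```python
# A naive model's death toll is (infections) x (assumed fatality rate),
# so any error in the rate passes straight through to the forecast.
def predicted_deaths(infected, fatality_rate):
    return infected * fatality_rate

infected = 10_000_000                      # hypothetical infection count
high = predicted_deaths(infected, 0.05)    # 5% assumed rate -> ~500,000
low = predicted_deaths(infected, 0.002)    # 0.2% assumed rate -> ~20,000
ratio = high / low                         # forecast inflated by ~25x
```

No amount of computing power changes this: the forecast is a multiple of the parameter, so the parameter's error is the forecast's error.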

I would think that most 3-year-old kids get this simple point and maybe this opinion is right. Nevertheless, most adults seem to be completely braindead today and they don’t get this point. When they are told that something was calculated by a computer, they worship the predictions. They don’t ask “whether the program was based on a realistic or scientifically supported theory”. Just the brute power of the pile of silicon seems to amaze them.

So we always agreed, e.g. with Richard Lindzen, that an important part of the degeneration of climate science was the drift away from proper “theory” to “modeling”. A scientist may lean more towards doing experiments – finding facts and measuring parameters with her hands (and much of experimental climate science remained OK; after all, Spencer and Christy are still measuring the temperature by satellites, etc.) – or may be a theorist, for whom the brain is (even) more important than it is for the experimenter. Experimenters sort of continued to do their work. However, it is mainly the “theorists” who have hopelessly degenerated in climate science, under the influence of toxic ideology, politics, and corruption.

The real problem is that proper theorists – those who actually understand the science, can solve basic equations in their heads, and are aware of all the intricacies in the process of finding the right equations, the equivalence and inequivalence of equations, universal behavior, statistical effects, etc. – were replaced by “modelers”, i.e. people who don’t really have a clue about science, who write computer-game-like code, worship their silicon, and mindlessly promote whatever comes out of this computer game. It is a catastrophe for the field – and the same was obviously happening in “theoretical epidemiology”, too.

“Models” and “good theory” aren’t just orthogonal. The culture of “models” is actively antiscientific because it comes with the encouragement to mindlessly trust in what happens in computer games. This isn’t just “different and independent from” the genuine scientific method. It just directly contradicts the scientific method. In science, you just can’t ever mindlessly trust something just because expensive hardware was used or a high number of operations was made by the CPU. These things are really negative for the trustworthiness and expected accuracy of the science, not positive. In science, you want to make things as simple as possible (because the proliferation of moving parts increases the probability of glitches) but not simpler; and you want to solve a maximum fraction of the issues analytically, not numerically or by a “simulation”.

Science is a systematic framework to figure out which statements about Nature are correct and which are incorrect.

And according to quantum mechanics, the truth values of propositions must be probabilistic. Quantum mechanics only predicts the “similarity [of propositions] to the truth” which is the translation of the Czech word for probability (pravděpodobnost).

It is the truth values (or probabilities) that matter in science – the separation of statements to right and wrong ones (or likely and unlikely ones). Again, I think that I am saying something totally elementary, something that I understood before I was 3 and so did many of you. But it seems obvious that the people who need to ask whether Leo’s or Stephen’s pictures are “theories of everything” must totally misunderstand even this basic point – that science is about the truth, not just representation of objects.

See also: The Deadly Lockdowns and Covid19 Linked to Affluence

Footnote:  Babylon Bee Has Some Fun with this Topic.

‘The Science On Climate Change Is Settled,’ Says Man Who Does Not Believe The Settled Science On Gender, Unborn Babies, Economics

PORTLAND, OR—Local man Trevor J. Gavyn pleaded with his conservative coworker to “believe the science on climate change,” though he himself does not believe the science on the number of genders there are, the fact that unborn babies are fully human, and that socialism has failed every time it has been tried.

“It’s just like, the science is settled, man,” he said in between puffs on his vape. “We just need to believe the scientists and listen to the experts here.”

“Facts don’t care about your feelings on the climate, bro,” he added, though he ignores the fact that there are only two biological genders. He also hand-waves away the science that an unborn baby is 100% biologically human the moment it is conceived and believes economics is a “conservative hoax foisted on us by the Illuminati and Ronald Reagan.”

“That whole thing is, like, a big conspiracy, man,” he said.

The conservative coworker, for his part, said he will trust the science on gender, unborn babies, and economics while simply offering “thoughts and prayers” for the climate.

Jimbob Does Coronavirus

Humor is important as a means of poking holes in narratives that assert beliefs contrary to reality. Jimbob has become a force skewering notions of climate change, as well as other distorted ideas comprising the “woke” PC canon. Those inside the believer bubble will not be affected, but the important audience is those who are ignorant of, or agnostic about, the so-called “progressive, post-modern agenda.”

A previous post Best Cartoons Madebyjimbob provided an introduction to this artist, along with his point of view.  This post presents his more recent images related to present pandemic foibles.

Another Way Carbon Makes Life Better

With wall thicknesses of about 160 nanometers, a closed-cell, plate-based nanolattice structure designed by researchers at UCI and other institutions is the first experimental verification that such arrangements reach the theorized limits of strength and stiffness in porous materials. Credit: Cameron Crook and Jens Bauer / UCI

Brian Bell, University of California, Irvine, writes at phys.org announcing a new way that carbon will serve humanity in years to come.  It’s another example of scientific progress making human life better. The article is Team designs carbon nanostructure stronger than diamonds. Excerpts in italics with my bolds.

Researchers at the University of California, Irvine and other institutions have architecturally designed plate-nanolattices—nanometer-sized carbon structures—that are stronger than diamonds as a ratio of strength to density.

In a recent study in Nature Communications, the scientists report success in conceptualizing and fabricating the material, which consists of closely connected, closed-cell plates instead of the cylindrical trusses common in such structures over the past few decades.

“Previous beam-based designs, while of great interest, had not been so efficient in terms of mechanical properties,” said corresponding author Jens Bauer, a UCI researcher in mechanical & aerospace engineering. “This new class of plate-nanolattices that we’ve created is dramatically stronger and stiffer than the best beam-nanolattices.”

According to the paper, the team’s design has been shown to improve on the average performance of cylindrical beam-based architectures by up to 639 percent in strength and 522 percent in rigidity.

Members of the architected materials laboratory of Lorenzo Valdevit, UCI professor of materials science & engineering as well as mechanical & aerospace engineering, verified their findings using a scanning electron microscope and other technologies provided by the Irvine Materials Research Institute.

Bauer said the team’s achievement rests on a complex 3-D laser printing process called two-photon lithography direct laser writing. As an ultraviolet-light-sensitive resin is added layer by layer, the material becomes a solid polymer at points where two photons meet. The technique is able to render repeating cells that become plates with faces as thin as 160 nanometers.

One of the group’s innovations was to include tiny holes in the plates that could be used to remove excess resin from the finished material. As a final step, the lattices go through pyrolysis, in which they’re heated to 900 degrees Celsius in a vacuum for one hour. According to Bauer, the end result is a cube-shaped lattice of glassy carbon that has the highest strength scientists ever thought possible for such a porous material.

Nanolattices hold great promise for structural engineers, particularly in aerospace, because it’s hoped that their combination of strength and low mass density will greatly enhance aircraft and spacecraft performance.

Other co-authors on the study were Anna Guell Izard, a UCI graduate student in mechanical & aerospace engineering, and researchers from UC Santa Barbara and Germany’s Martin Luther University of Halle-Wittenberg. The project was funded by the Office of Naval Research and the German Research Foundation.

Footnote: This material adds to the many ways our lives are already enriched by carbon-based materials.

Don’t Confuse The Virus and the Disease

Over several decades since 1981 we have learned to distinguish between one virus and the disease it can cause:

HIV: Human Immunodeficiency Virus, and
AIDS: Acquired ImmunoDeficiency Syndrome.

And of course over time scientists have identified two main virus strains:
HIV-1 is more virulent, more easily transmitted, and the cause of the vast majority of HIV infections globally.
HIV-2 is less transmissible and is largely confined to West Africa.

In the rush to inform people during this current pandemic, the terminology for public consumption has glossed over important distinctions between coronavirus, the Wuhan novel virus and the disease fatal to some people.

Some technical terminology from WHO: Naming the coronavirus disease (COVID-19) and the virus that causes it.

Coronaviruses

First characterized in the 1960s, these are a group of related viruses that cause diseases in mammals and birds. In humans, coronaviruses cause respiratory tract infections that can be mild, such as some cases of the common cold (among other possible causes, predominantly rhinoviruses), and others that can be lethal, such as SARS and MERS.

Novel coronavirus originating in Wuhan, China.

SARS-CoV-2 (Severe Acute Respiratory Syndrome CoronaVirus 2)

WHO’s International Committee on Taxonomy of Viruses (ICTV) announced “severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2)” as the name of the new virus on 11 February 2020. This name was chosen because the virus is genetically related to the coronavirus responsible for the SARS outbreak of 2003. While related, the two viruses are different.

2019 Coronavirus Disease Pandemic

COVID-19 (COronaVIrus Disease 2019)

WHO announced “COVID-19” as the name of this new disease on 11 February 2020, following guidelines previously developed with the World Organisation for Animal Health (OIE) and the Food and Agriculture Organization of the United Nations (FAO).

A Timeline of Historical Pandemics
(link goes to visualization by Flourish team)

Technical accuracy with these terms is important to understand testing and reports of the pandemic progress. A helpful guide is published in Scientific American today Here’s How Coronavirus Tests Work—and Who Offers Them. Excerpts in italics with my bolds.

PCR-based tests are being rolled out in hospitals nationwide, and the Food and Drug Administration is fast-tracking novel approaches as well

Virus Testing

The first step in any coronavirus test is collecting a sample. Doing so involves placing a sterile swab at the back of a patient’s nasal passage, where it connects to the throat via the nasopharynx, for several seconds to absorb secretions. Scott Wesley Long, a clinical microbiologist who directs Houston Methodist Hospital’s diagnostic microbiology lab, says the swab is thin—less than three millimeters in diameter at its tip. “Once you place it in the back of the throat, it’s uncomfortable, but you can still breathe and talk,” he says. “It’s not as bad as it looks.” After a sample is collected, the swab goes into a liquid-filled tube for transport.

To determine whether a nasopharyngeal sample is positive for the coronavirus, biotechnicians use a technique known as reverse transcriptase polymerase chain reaction, or RT-PCR. The World Health Organization’s and CDC’s test kits both use this method, as do all of the kits the latter has approved to date. [This detects signs of the virus’s genetic material.]

Stephanie Caccomo, a spokesperson for the FDA, says the positive predictive value, or likelihood a positive test result correctly reflects that a patient has COVID-19, depends on how widespread the disease is—and that situation is changing quickly. “Based on what is known about the pathophysiology of COVID-19, the data provided and our previous experience with respiratory pathogen tests, the false-positive rate for authorized tests is likely to be very low, and the true-positive rate is likely to be high,” Caccomo says.

Person loads a Mesa Biotech cartridge into a dock for testing. Credit: Mesa Biotech

On Saturday Cepheid, a Silicon Valley–based molecular diagnostics company, said the FDA had granted it authorization for a COVID-19 test that can deliver results in about 45 minutes. And on Tuesday Mesa Biotech in San Diego announced it had received the go-ahead for a handheld test kit that Hong Cai, the company’s CEO, says can deliver results at bedside in about half an hour. Cai says the tests will begin shipping this week to “several hospitals” and that her company has tens of thousands of units ready to go, adding that Mesa is planning to triple its production capacity.

Antibody Testing

Another approach relies on identifying antibodies to the coronavirus (SARS-CoV-2) in a patient’s bloodstream to determine whether that person previously had COVID-19. Florian Krammer, a microbiologist at the Icahn School of Medicine at Mount Sinai, recently developed one of these tests, which is described in a preprint study posted last week on medRxiv. “This is not a test for [ongoing] infections,” he says. “It basically looks for antibodies after the fact, after you had an infection.” Like other serological, or antibody-based, diagnostic assays, it uses an enzyme-linked immunosorbent assay (ELISA), which employs a portion of the target virus to find antibodies. Although serological tests are not useful for quickly identifying whether a patient currently has COVID-19, Krammer says they can help researchers understand how humans produce antibodies to the virus.

Serological tests can also help determine whether a person has been infected, whether or not the individual had symptoms, something an RNA test kit cannot do after the fact, because it looks only for the virus itself. That means serological tests could be used to survey a population to determine how widespread infection rates were. It also could allow public health agencies to figure out who is already immune to COVID-19. “So if you would roll this out on a very wide scale, you could potentially identify everybody who is immune and then ask them to go back to their regular life and go back to work,” Krammer says. This approach could be especially useful for health care providers who are working with COVID-19 patients. “They might feel much more comfortable working with those patients, [knowing] that they can’t get sick anymore, knowing that they can’t pass on the virus to others,” he says.
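When a serological survey is used this way, the raw positive rate misstates the true infection rate if the assay is imperfect. The standard Rogan–Gladen correction (not mentioned in the article; shown here as a sketch with illustrative numbers) adjusts for sensitivity and specificity:

```python
def rogan_gladen(observed_positive_rate, sensitivity, specificity):
    """Adjust an observed test-positive rate for assay error.

    true prevalence ~= (observed + specificity - 1) / (sensitivity + specificity - 1)
    The estimate is clamped to [0, 1] because sampling noise can push
    the raw formula outside that range.
    """
    adj = (observed_positive_rate + specificity - 1) / (sensitivity + specificity - 1)
    return min(max(adj, 0.0), 1.0)

# Illustrative: 6% of survey samples test positive on an assay with
# 90% sensitivity and 98% specificity.
print(rogan_gladen(0.06, 0.90, 0.98))
```

With those assumed figures the adjusted prevalence (about 4.5%) is noticeably below the raw 6% positive rate, because false positives dominate at low prevalence.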

Comment:

In common discourse, we talk about “disease” or “illness”, referring to how we feel, that is, our awareness of symptoms. In fact, the entry of a virus (or another pathogen such as a bacterium, fungus, or parasite) sets up a battle with our immune system even before we know it. When the virus is defeated quickly, we have mild or no symptoms, and at least in the case of seasonal flus, we can be immune to further infection. In some cases, people weakened by fighting other pathogens will need hospital help and may not survive. The subtle point is that the presence of the virus and the state of the disease are two different things.

This video is helpful in getting the basics right (published March 9, 2020)

See also: Progress on Covid19 Antibodies

Fight Coronavirus with Global Warming

An important study of our experience with the COVID-19 pandemic shows that warmer, more humid weather works against transmission of the disease. The paper is High Temperature and High Humidity Reduce the Transmission of COVID-19 by Jingyuan Wang, Ke Tang, Kai Feng, and Weifeng Lv. Excerpts in italics with my bolds.

Abstract: This paper investigates how air temperature and humidity influence the transmission of COVID-19. After estimating the serial interval of COVID-19 from 105 pairs of the virus carrier and the infected, we calculate the daily effective reproductive number, R, for each of all 100 Chinese cities with more than 40 cases. Using the daily R values from January 21 to 23, 2020 as proxies of non-intervened transmission intensity, we find, under a linear regression framework for 100 Chinese cities, high temperature and high relative humidity significantly reduce the transmission of COVID-19, respectively, even after controlling for population density and GDP per capita of cities. One degree Celsius increase in temperature and one percent increase in relative humidity lower R by 0.0383 and 0.0224, respectively. This result is consistent with the fact that the high temperature and high humidity significantly reduce the transmission of influenza. It indicates that the arrival of summer and rainy season in the northern hemisphere can effectively reduce the transmission of the COVID-19.

Discussion: Rough observations of outbreaks of COVID-19 outside China show a noteworthy phenomenon. In the early days of the outbreak, countries with relatively lower air temperature and lower humidity (e.g., Korea, Japan, and Iran) saw more severe outbreaks than warmer and more humid countries (e.g., Singapore, Malaysia, and Thailand) did. Taking the natural log of the average number of cases per day from February 8 to 29 as a rough measure of the severity of the COVID-19 outbreaks, we show in Figure 1 that severity is negatively related to temperature and relative humidity, using 14 countries with more than 20 new cases during this period.

Figure 1: Severity of COVID-19 outbreaks vs. temperature and relative humidity for countries outside China.

Inside China, COVID-19 has spread widely to many cities, and the intensity of transmission and the weather conditions in these cities vary greatly (shown in Table SI 1); we can therefore analyze the determinants of COVID-19 transmission, especially the weather factors. To formally quantify the transmission of COVID-19, we first fit 105 samples of serial intervals with the Weibull distribution (a distribution commonly used to fit the serial interval of influenza [8]), then calculate the effective reproductive number, R, a quantity measuring the severity of infectiousness [9], for each of the 100 Chinese cities with more than 40 cases.
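The Weibull-fitting step described above can be sketched with SciPy. The serial-interval data below is synthetic, standing in for the 105 observed carrier–infected pairs the paper used, so the fitted numbers are illustrative only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic serial intervals in days (an assumption for illustration;
# the paper fit 105 observed carrier-infected pairs).
serial_intervals = rng.weibull(2.0, size=105) * 6.0

# Fit a two-parameter Weibull with the location fixed at zero,
# since a serial interval cannot be negative.
shape, loc, scale = stats.weibull_min.fit(serial_intervals, floc=0)
mean_si = stats.weibull_min.mean(shape, loc, scale)
print(f"shape={shape:.2f}  scale={scale:.2f}  mean={mean_si:.2f} days")
```

The fitted serial-interval distribution is then the ingredient that lets daily case counts be converted into an effective reproductive number R for each city.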

Figure 3: Effective reproductive number R vs. temperature and relative humidity for 100 Chinese cities.

Figure 2 shows the average R values from January 21 to 23 for different Chinese cities geographically. Compared with the southeast coast of China, cities in the northern area of China show relatively larger R values and lower temperatures and relative humidity. The scatter plots in Figure 3 illustrate two negative relations between the daily air temperature and R value and between the daily relative humidity and R value, respectively.

Our finding is consistent with the evidence that high temperature and high humidity reduce the transmission of influenza [10–14], which can be explained by two possible reasons: First, the influenza virus is more stable in cold temperature, and respiratory droplets, as containers of viruses, remain airborne longer in dry air [15, 16]. Second, cold and dry weather can also weaken the hosts’ immunity and make them more susceptible to the virus [17, 18]. These mechanisms are also likely to apply to the COVID-19 transmission. Our result is also consistent with the evidence that high temperature and high relative humidity reduce the viability of SARS coronavirus [19, 20].

Omitting control variables, the fixed-effects model of Table 2 provides an estimate, Equation (1), of the R value for a city given its temperature and relative humidity. Assuming that the same relationship of Equation (1) applies to cities outside China and that the temperature and relative humidity of 2020 are the same as those of 2019, we can draw a map of R values for worldwide cities in Figure 4 by plugging the average March and July temperatures and relative humidity of 2019 into Equation (1). This figure cautions about the risk of COVID-19 outbreaks worldwide in March and July of 2020, respectively. As expected, the R values are larger for temperate countries and smaller for tropical countries in March. In July, the arrival of summer and the rainy season in the northern hemisphere can effectively reduce the transmission of COVID-19; however, risks remain in some countries in the southern hemisphere (e.g., Australia and South Africa). If we plug the normal summer temperature and relative humidity of Tokyo (28°C and 85%, respectively) into Equation (1), the transmission of COVID-19 in Tokyo will be seriously reduced between March and the Olympics: the estimated R value decreases from 1.914 to 0.992, a 48% drop!
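The image of Equation (1) does not survive in this excerpt, but the slopes are given in the abstract (−0.0383 per °C, −0.0224 per percentage point of relative humidity). As a sketch, combining those slopes with an intercept of 3.968 reproduces the quoted Tokyo summer figure; that intercept is my assumption, inferred only from the quoted numbers, not taken from the paper:

```python
def estimated_R(temp_c, rel_humidity_pct, intercept=3.968):
    """Hypothetical reconstruction of the paper's Equation (1).

    The two slopes come from the paper's abstract; the intercept is an
    assumption chosen so the formula matches the Tokyo summer estimate
    quoted in the text (R ~ 0.992 at 28 C and 85% relative humidity).
    """
    return intercept - 0.0383 * temp_c - 0.0224 * rel_humidity_pct

# Tokyo's normal summer conditions, as quoted in the excerpt:
print(round(estimated_R(28, 85), 3))  # -> 0.992
```

Under this reconstruction, each degree of warming or each point of added humidity shaves a fixed amount off R, which is why the tropics and the northern-hemisphere summer come out favorably on the paper's Figure 4 map.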

Postscript:  Some Context on US Situation from Conrad Black

The United States is now outdone only by Germany and Canada, among countries with sophisticated public-health systems that publish believable numbers, in the small proportion of reported cases who die from the coronavirus. This is 674 people out of 51,542 cases reported, as of late afternoon Tuesday, or about 1.3 percent of identified cases, and if those who are immune-challenged are removed from that figure, the percentage descends to less than half of 1 percent of the identified cases. Even though most of the people tested appeared to have possible coronavirus symptoms, only a little more than 15 percent of those tested have tested positive. Because the United States is ramping up its treatment capabilities so quickly, it has an inordinate number of the world’s reported cases, 23 percent of the world’s new cases reported on Monday, though it only has about 4 percent of the world’s population, but the world fatality rate is about 4 percent, more than three times the American rate. The disease is still spreading unavoidably, but if care is taken to insulate the elderly and infirm from contact, the mortality rate descends to a point not greatly above seasonal flu fatality numbers.

Though it is hard to be precise about it, less than 1 percent of the adult population of the U.S. have apparently reported coronavirus-like symptoms; of those, about 20 percent have been tested; of those, about a quarter have tested positive; and of those, apart from clearly vulnerable people, fewer than half of 1 percent have died. In epidemiological terms, this is a very serious penetration of the population by a very nasty virus, but it does not justify continuing the extreme restrictions on the economic life of the country, and specifically this lethal threat to the economic well-being of tens of millions of Americans.