Useful vs. Useless COVID-19 Models: A Response to the Armchair Analysts

There's been a lot of discussion by armchair analysts about various models being used to predict outcomes of COVID-19.  The armchair analysts I've seen include a philosophy major and a Ph.D. candidate with little experience in statistics, much less in modeling complex systems.  In fact, the discussion coming from academia and its sycophants in the media further demonstrates just how deep the "Deep State" runs.  For those of us who have built statistical models, all of this discussion brings to mind George Box's dictum: "all models are wrong, but some are useful"...or useless, as the case may be.

The problem with data-driven models, especially when data are lacking, can be explained easily.  I'll start with a brief background on statistical analysis (a.k.a. hypothesis testing).  First of all, in terms of helping decision-makers make quality decisions, statistical hypothesis testing and data analysis are just one tool in a large toolbox, and that tool rests on what we often call reductionist theory.  In short, it examines parts of a system and then makes inferences about the whole system.
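
To make that "parts to whole" idea concrete, here is a minimal sketch in Python (using the numpy and scipy libraries; the data are simulated and every number is invented purely for illustration) of drawing a small sample from a large population and testing a hypothesis about the population mean:

    # Toy illustration: infer a property of a whole "population" from a small sample.
    # All numbers are invented for illustration only.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    population = rng.normal(loc=100.0, scale=15.0, size=1_000_000)  # the "whole system"
    sample = rng.choice(population, size=50, replace=False)         # the "part" we observe

    print("sample mean:", sample.mean())                        # point estimate of the population mean
    t_stat, p_value = stats.ttest_1samp(sample, popmean=95.0)   # test a hypothesized mean
    print("t =", t_stat, "p =", p_value)

Everything the test tells us about the million-member "system" is extrapolated from the fifty members we actually observed; that is reductionism in a nutshell.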

The tool is usually quite good at testing hypotheses under carefully controlled experimental conditions.  For example, the success of the pharmaceutical industry is, in part, due to the fact that it can design and implement controlled experiments in a laboratory.  However, even under controlled experimental procedures, the tool has limitations.  Simple confidence intervals (C.I.) provide good insight into the accuracy (usefulness) of such models.  For the COVID-19 models that we have seen on the so-called "news," the C.I. is often reported as a range of the predicted number of people who will contract or die from the disease (e.g., 60,000 to 2 million).  These C.I.s are quite large and quite useless, at least in terms of helping decision-makers make such dire decisions about our health, economy, and civil liberties.
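
To see why such enormous ranges appear, consider a toy projection (emphatically not any published COVID-19 model; the parameters are invented) in which a modest uncertainty in an assumed daily growth rate is compounded over 90 days:

    # Toy simulation: project a case count 90 days ahead under an uncertain growth rate.
    # This is NOT any actual COVID-19 model; all parameters are invented.
    import numpy as np

    rng = np.random.default_rng(1)
    initial_cases = 1_000
    growth_rate = rng.normal(loc=0.05, scale=0.02, size=10_000)        # uncertain daily rate
    projected = initial_cases * np.exp(np.clip(growth_rate, 0, None) * 90)

    low, high = np.percentile(projected, [2.5, 97.5])
    print(f"95% interval for projected cases: {low:,.0f} to {high:,.0f}")

A small spread in the assumed growth rate, compounded for three months, yields an interval spanning roughly three orders of magnitude, which is about as useful to a decision-maker as no number at all.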

The armchair analysts' descriptions of these C.I.s show how unfamiliar they are with even the simplest points of statistical interpretation.  Assuming a 95% C.I., reductionist tools cannot say there is a 95% chance that the true mean lies in the particular interval reported.  They can say only that if the sampling were repeated many times, about 95% of the intervals constructed in the same way would contain the true mean.
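
A short simulation shows what that repeated-sampling interpretation actually means (again in Python with numpy and scipy; the true mean is known here only because we invented it):

    # Repeated-sampling interpretation of a 95% confidence interval.
    # The parameters are arbitrary; the point is the long-run coverage rate.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    true_mean, true_sd, n, trials = 50.0, 10.0, 30, 10_000
    covered = 0
    for _ in range(trials):
        sample = rng.normal(true_mean, true_sd, size=n)
        low, high = stats.t.interval(0.95, df=n - 1,
                                     loc=sample.mean(), scale=stats.sem(sample))
        covered += (low <= true_mean <= high)

    print("fraction of intervals containing the true mean:", covered / trials)
    # Any single interval either contains the true mean or it does not;
    # the "95%" describes the long-run behavior of the procedure, not one interval.

That fraction comes out near 0.95, but nothing in the procedure licenses a probability statement about any one reported interval.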

Thus far, most of the data appear to come from large population centers like N.Y.  This means the data are biased, which makes the entire analysis invalid for making any decisions outside N.Y. or similar geographical areas.  It would be antithetical to the scientific method if such data were used to make decisions in, for example, Wyoming.  While these models can sometimes provide decision-makers with useful information, the actual decisions being made are far too important and complex to be based on such inaccurate data.  There are volumes of scientific literature that explain the limitations of reductionist methods, should the reader wish to investigate further.
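
A toy example (invented rates, nothing to do with actual N.Y. or Wyoming figures) shows how an estimate built only from a dense urban subpopulation misstates the rate for the population as a whole:

    # Toy illustration of sampling bias: estimating an overall rate
    # from data drawn only from a dense urban subpopulation.
    # The underlying rates and population sizes are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(3)
    urban = rng.binomial(1, 0.08, size=2_000)     # higher underlying rate
    rural = rng.binomial(1, 0.01, size=18_000)    # lower underlying rate

    urban_only_estimate = urban.mean()                      # what the biased sample gives us
    overall_rate = np.concatenate([urban, rural]).mean()    # what we actually want to know

    print("urban-only estimate:", urban_only_estimate)
    print("overall rate:       ", overall_rate)

The urban-only figure overstates the overall rate several times over, and no amount of statistical polish applied to the biased sample fixes that.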

Considering the limitations of this tool under controlled laboratory conditions, imagine what happens within more complex systems that encompass large areas, contain millions of people, and vary with time (e.g., seasonal or annual changes).  In fact, when it comes to predicting outcomes within complex, adaptive, dynamic systems, where controlled experiments are not possible and where data are lacking and large amounts of uncertainty exist, the reductionist's tool is not useful.  This is why climate change modeling has little credibility among real scientists and modelers (other than among those pseudo-scientists who feather their nests with taxpayers' money by making up these models).  The many flaws of the approach used by the climate change (and certain COVID-19) modelers are too extensive to be covered in this article, but again, there are volumes of scientific literature that describe these flaws and support what I am saying.  Instead, I will briefly describe a solution to modeling complex systems.

The solution, and the type of modeling needed to make quality decisions (not just predict numbers), is what we modelers call participatory scenario modeling.  The key to this method is to use Decision Science tools that explicitly link data with the knowledge and opinions of a diverse mix of subject matter experts (SMEs).  The method uses a systems approach (not a reductionist one) and seeks to help the decision-maker weigh the available options.  A good scientist should recognize that modeling complex adaptive systems requires a diversity of thought and experience.  Nobody should trust a model of this nature if it's developed by a few people in an ivory tower.  Anybody with any street smarts knows that if you go to a surgeon (or invite only surgeons to the table), you already know the answer you will get: they will undoubtedly recommend surgery.  Again, the key is participation from a diverse set of SMEs from interdisciplinary backgrounds working together to build models that assess the decision options in terms of the probabilities of the possible outcomes.
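
One simple, widely used way to combine such a panel's judgments is a linear opinion pool, i.e., a weighted average of each expert's probabilities for the possible outcomes.  The sketch below is only a toy illustration; the expert labels, weights, and probabilities are invented:

    # Minimal linear opinion pool: weighted average of several experts'
    # probabilities for a set of possible outcomes.  All numbers are invented.

    def pool(expert_judgments, weights):
        """Return the weighted-average probability for each outcome."""
        outcomes = next(iter(expert_judgments.values())).keys()
        total = sum(weights.values())
        return {o: sum(weights[e] * expert_judgments[e][o]
                       for e in expert_judgments) / total
                for o in outcomes}

    judgments = {
        "epidemiologist": {"mild": 0.2, "moderate": 0.5, "severe": 0.3},
        "economist":      {"mild": 0.4, "moderate": 0.4, "severe": 0.2},
        "physician":      {"mild": 0.1, "moderate": 0.6, "severe": 0.3},
    }
    weights = {"epidemiologist": 1.0, "economist": 1.0, "physician": 1.0}

    print(pool(judgments, weights))   # {'mild': 0.23..., 'moderate': 0.5, 'severe': 0.27...}

Equal weights are used here for simplicity; in a real participatory exercise, the weighting and the elicitation are themselves worked out by the group.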

For the COVID-19 issue, we likely need a set of models for health, medical, and economic decisions that feed into final decision-support models to help the decision-makers weigh their options.  This can all be accomplished with currently available Decision Science methods.  No experienced decision-maker would (or should) rely on any one model or any one SME (especially if they come from the Deep State) when making complex decisions with so much uncertainty and so much at stake.  There are volumes of scientific literature showing that individual experts are no better than laymen at making quality decisions in systems characterized by complexity and uncertainty, unless they use a structured decision-making process supported by tools like participatory scenario modeling.
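
To show the flavor of that final step, here is a stripped-down decision matrix (all scenario probabilities and option scores invented for illustration): each option is scored against each scenario, weighted by the pooled scenario probabilities, and compared on expected score:

    # Stripped-down decision matrix: weigh each option against each scenario.
    # Scenario probabilities and option scores are invented for illustration.
    import numpy as np

    scenarios = ["mild outbreak", "moderate outbreak", "severe outbreak"]
    probabilities = np.array([0.3, 0.5, 0.2])      # e.g., from a pooled SME judgment

    options = {
        # each score blends health and economic outcomes on an agreed 0-100 scale
        "no restrictions":   np.array([80, 40, 10]),
        "targeted measures": np.array([70, 65, 50]),
        "full lockdown":     np.array([40, 55, 60]),
    }

    for name, scores in options.items():
        print(f"{name}: expected score = {scores @ probabilities:.1f}")

The arithmetic is trivial; the hard and valuable work is getting a diverse panel to agree on the scenarios, the probabilities, and the scoring scale in the first place.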

The problem is that the pseudo-scientists of academia, many so-called "non-profits," and the government agencies that fund them often allow only SMEs who drink the Kool-Aid and agree with their agenda to participate.  In short, they often rig the participatory models.  I'm not saying this is occurring with COVID-19, but it is happening with climate change and land-use models.  There is a danger of cherry-picking the SMEs.  Let's hope that doesn't happen during the COVID-19 crisis.

Science is a quest for truth, not consensus.  The scientific method, if carried out with honor and integrity, seeks only to estimate and interpret God's truth as best we can so that we may make wise choices for ourselves, our families, our community, and our country.  In fact, the great scientists of the past who created and developed the scientific method 400 years ago — who, by the way, were deeply religious men — required the use of multiple working hypotheses.  To the layman, this simply means we include different perspectives, ideas, and experiences in our quest for the truth.  Some scientists, like me, have even called for making science itself more democratic, which is why I am no longer welcome in the halls of academia.

For those dogmatic Deep-State pseudo-scientists in academia (and their lackeys in the "media" and Congress), I will end with a history lesson from Francis Bacon, the founder of the scientific method.  Bacon himself intended the scientific method ("the interpretation of nature", as he called it) to be not the whole truth, but rather only one part of it:

Let there be therefore (and may it be for the benefit of both) two streams and two dispensations of knowledge, and in like manner two tribes or kindreds of students in philosophy — tribes not hostile or alien to each other, but bound together by mutual services; let there in short be one method for the cultivation, another for the invention, of knowledge.  (Francis Bacon, 1620)
