In the past 24 hours, there have been approximately 1,800 reported COVID deaths in the United States.
USA COVID-19 stats as of 08:30 AM on May 15, 2020
🦠 Cases 🦠: 1,458,243 (+117)
☠️ Deaths ☠️: 86,942 (+5)
Updated GU,PR since 08:00 AM on May 15, 2020
— USCovidDeathBot (@USCovidDeathBot) May 15, 2020
Today is May 15th.
Just ten days ago, the Council of Economic Advisers used a canned Excel tool to project that there would be virtually no deaths today.
To better visualize observed data, we also continually update a curve-fitting exercise to summarize COVID-19's observed trajectory. Particularly with irregular data, curve fitting can improve data visualization. As shown, IHME's mortality curves have matched the data fairly well. pic.twitter.com/NtJcOdA98R
— CEA (@WhiteHouseCEA) May 5, 2020
Many people and sites, including Balloon-Juice, did an immediate spit take of WTF-ery.
For there to have been no deaths in the past twenty-four hours would strongly imply that there were no infections after the last couple of days of April. We know that COVID kills comparatively slowly. There is usually a significant gap between infection and testing, then between testing and hospitalization, and finally between hospitalization and death.
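That lag logic is easy to sketch. Below is a minimal simulation in Python in which every number is an illustrative assumption rather than a fitted estimate: push a hypothetical stream of infections through an assumed infection-to-death delay distribution, stop the infections cold, and deaths still keep arriving for weeks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed infection-to-death delay: lognormal with a median of about 21 days.
# These parameters are illustrative, not estimates from any study.
delays = rng.lognormal(mean=np.log(21.0), sigma=0.4, size=100_000)
delay_pmf = np.bincount(delays.round().astype(int), minlength=60)[:60] / len(delays)

# Hypothetical epidemic: 20,000 infections per day that stop cold on day 60.
infections = np.zeros(120)
infections[:60] = 20_000.0
ifr = 0.006  # assumed infection fatality rate of 0.6%

# Expected deaths on day t: sum over lags d of infections[t-d] * IFR * P(delay = d).
expected_deaths = np.convolve(infections * ifr, delay_pmf)[:120]

# Even with zero new infections after day 60, deaths persist for weeks:
print(expected_deaths[60:100:10].round())
```

So a projection of zero deaths on May 15 is not just a miss on one day; it quietly asserts that transmission ended weeks earlier.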
So besides dunking on the CEA, what can we learn from this?
There are an incredible number of models out there. Some are trying to project infection rates. Others are attempting to predict hospitalization capacity. More are coming online to identify the safest way to reopen limited physical interaction. Some of these models are going to be good. Some are going to be great at one specific task and useless for everything else, and others are going to be cubic-fit dumpster fires.
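That failure mode is easy to reproduce. Here is a minimal sketch with made-up data: fit a third-degree polynomial to daily deaths that have plateaued, extrapolate, and the projection dives through zero no matter what the epidemic is actually doing.

```python
import numpy as np

days = np.arange(40)
# Hypothetical daily deaths: a rise to a rough plateau around 2,000 per day.
deaths = 2_000 / (1 + np.exp(-(days - 15) / 4))

coeffs = np.polyfit(days, deaths, deg=3)   # the infamous cubic
future = np.arange(40, 70)
projection = np.polyval(coeffs, future)

# A cubic has to turn somewhere; extrapolated past the data, it dives toward
# and then below zero regardless of the actual epidemiology.
print(projection[::10].round())
```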
How do you tell the difference?
The first thing we, as consumers of projections, need to do is know what a particular model is attempting to do. We need to know that a hospital bed projection model should be assessed on hospital bed demand first and foremost. A limited model should not be generalized past its own limits.
Secondly, we need to look into the assumptions that every model makes. Do those assumptions make inherent sense given what we have learned? We know that COVID is a slow killer. We know it is fairly easily spread when people are in prolonged contact in enclosed spaces. We know a lot. Do the assumptions violate what we know? Also, do the assumptions rely on heroic expectations about political and social behavior that are unlikely to be fulfilled? Does the model assume the US will go to Wuhan- or Italian-style sheltering in place and hold those orders for the entire summer? That is unlikely (the Italian version of opening up is the equivalent of the stricter US shelter-in-place orders).
Do the models reflect new information well? The emergency authorization of remdesivir is the equivalent of several hundred thousand hospital days avoided, which is in turn the equivalent of a few dozen hospitals and attendant workforces the size of Massachusetts General being magically constructed. Do the models adjust as we learn more and as new technologies and knowledge change courses of treatment? A back-of-envelope version of that arithmetic is sketched below.
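In this sketch every input is an assumption chosen for illustration, except the roughly four-day reduction in recovery time, which comes from the preliminary ACTT-1 trial results; the hospital-equivalents scale linearly with the assumed caseload.

```python
# Back-of-envelope arithmetic. All inputs are illustrative assumptions except
# the recovery-time difference (ACTT-1 preliminary: median 11 days on
# remdesivir vs. 15 days on placebo).
hospitalizations = 100_000        # assumed COVID hospitalizations over a season
days_saved_per_patient = 15 - 11  # roughly four fewer hospital days per patient

hospital_days_avoided = hospitalizations * days_saved_per_patient
print(f"{hospital_days_avoided:,} hospital days avoided")

mgh_beds = 1_000                  # Massachusetts General is roughly this size
for window_days in (30, 90):      # how quickly those days would have been needed
    equivalent = hospital_days_avoided / (mgh_beds * window_days)
    print(f"= {equivalent:.0f} MGH-sized hospitals running flat out for {window_days} days")
```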
Do the models do well in predicting the short term? Tomorrow should be easier to predict than August. Does a model predict the near future well, or are the modelers creating a bunch of epicycles to explain their misses? Is uncertainty expressed well? We are still on the OMG-we-are-learning-so-much-every-day part of the knowledge curve right now. There are a tremendous number of things that we just don't know. Every model is going to be wrong, but do the model and modelers acknowledge that they are going to be wrong and give you parameters of plausible wrongness? Again, as a working assumption, tomorrow is easier to project than August, so confidence intervals should reflect that.
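Even the simplest stochastic model makes the point about horizons. In the sketch below, daily deaths are treated as a random walk with an assumed, illustrative volatility; an honest 95% interval then widens with the square root of the forecast horizon.

```python
import numpy as np

sigma = 150.0                        # assumed day-to-day volatility in deaths
horizons = np.array([1, 7, 30, 90])  # tomorrow vs. next week vs. late summer

# For a random walk, the h-step-ahead forecast standard deviation is
# sigma * sqrt(h), so a 95% interval is roughly +/- 1.96 * sigma * sqrt(h).
half_width = 1.96 * sigma * np.sqrt(horizons)
for h, w in zip(horizons, half_width):
    print(f"{h:3d} days out: 95% interval = point forecast +/- {w:,.0f}")
```

A model whose August band is no wider than its band for tomorrow is telling you something about its modelers, not about the epidemic.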
Finally, bringing back a golden oldie from the dawn of the blogosphere: Daniel Davies' One Minute MBA.
Fibbers’ forecasts are worthless. Case after miserable case after bloody case we went through, I tell you, all of which had this moral. Not only that people who want a project will tend to make inaccurate projections about the possible outcomes of that project, but about the futility of attempts to “shade” downward a fundamentally dishonest set of predictions. If you have doubts about the integrity of a forecaster, you can’t use their forecasts at all. Not even as a “starting point”. By the way, I would just love to get hold of a few of the quantitative numbers from documents prepared to support the war and give them a quick run through Benford’s Law.
Application to Iraq: This was how I decided that it was worth staking a bit of credibility on the strong claim that absolutely no material WMD capacity would be found, rather than “some” or “some but not enough to justify a war” or even “some derisory but not immaterial capacity, like a few mobile biological weapons labs”. My reasoning was that Powell, Bush, Straw, etc, were clearly making false claims and therefore ought to be discounted completely, and that there were actually very few people who knew a bit about Iraq but were not fatally compromised in this manner who were making the WMD claim. Meanwhile, there were people like Scott Ritter and Andrew Wilkie who, whatever other faults they might or might not have had, did not appear to have told any provable lies on this subject and were therefore not compromised.
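As an aside, the Benford's Law screen Davies mentions is straightforward to run: tabulate the leading digits of a batch of reported figures and compare them against Benford's expected distribution, P(d) = log10(1 + 1/d). A minimal sketch, with invented placeholder figures:

```python
import math
from collections import Counter

def first_digit(x: float) -> int:
    """Leading nonzero digit of x (x must be nonzero)."""
    s = str(abs(x)).lstrip("0.")
    return int(s[0])

# Invented placeholder figures; a real check would use a much larger batch
# of reported numbers from the documents in question.
figures = [1_458_243, 86_942, 117, 21_500, 3_892, 47, 910, 66_210, 1_340, 5]
counts = Counter(first_digit(x) for x in figures)

for d in range(1, 10):
    expected = math.log10(1 + 1 / d)   # Benford's expected frequency
    observed = counts[d] / len(figures)
    print(f"digit {d}: expected {expected:.1%}, observed {observed:.1%}")
```

Fabricated numbers tend to have suspiciously uniform leading digits; genuinely organic figures skew heavily toward 1s and 2s.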
Ignore known liars, charlatans and frauds.