I can't wait for bona fide statisticians to really dig into this "research." Some interesting and perhaps very troubling observations from this analysis:
1) This isn't peer reviewed. While peer review isn't perfect, as my additional points will show, the authors would absolutely get shredded if they submitted this research to a bona fide scientific journal.
2) The authors love to rewrite other papers' conclusions to justify their own. For example:
The authors state on page 15: "Some studies find a significant positive relationship between lockdowns and mortality. This includes Chisadza et al. (2021), who find that stricter lockdowns (higher OxCGRT stringency index) increases COVID-19 mortality by 0.01 deaths/million per stringency point."
What did the study by Chisadza state?
From the abstract: "Using the Oxford COVID-19 Government Response Tracker (OxCGRT) dataset for a global sample of countries between March and September 2020, we find a non-linear association between government response indices and the number of deaths. Less stringent interventions increase the number of deaths, whereas more severe responses to the pandemic can lower fatalities."
From the discussion: "We find that the overall government response index has a non-linear association with the number of deaths—driven by the containment and health interventions—for the aggregated sample of countries. The number of deaths increases with partially relaxed lockdown restrictions, but decreases with severe restrictions."
Uh oh, that doesn't look right...
3) The authors like to cherry-pick data. For example, as with #2, the authors didn't like the conclusions of the Chisadza et al. study. Why? Chisadza found their initial analysis was giving nonsensical results. In their original linear model, every government intervention, including health measures (testing policy, contact tracing, public information campaigns, and investments in vaccines and healthcare), increased the risk of death. Think about that. Does that make sense to anyone? Of course not. And Chisadza agreed. So they redid their analysis with a nonlinear model, and now the data made sense. In fact, they found a significant effect: severe government-triggered closures and shutdowns reduced deaths.
But the economists didn't like these results. They did some handwaving and used the discarded linear result suggesting that government-triggered closures and shutdowns increase deaths.
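To see why the choice of model matters here, consider a toy sketch (this is NOT Chisadza et al.'s actual data or model; every number below is invented purely for illustration). If the true relationship between stringency and deaths is an inverted U, a forced-linear fit over part of the range can report a positive "lockdowns increase deaths" slope even though deaths fall at the severe end:

```python
# Sketch: why a forced-linear model can report "stricter lockdowns -> more
# deaths" when the true relationship is an inverted U (deaths rise under
# partial restrictions, fall under severe ones). All numbers are made up.
import numpy as np

stringency = np.arange(0, 71, dtype=float)           # OxCGRT-style 0-100 scale
deaths = -0.02 * (stringency - 40.0) ** 2 + 50.0     # inverted U, peak at 40

lin_slope = np.polyfit(stringency, deaths, 1)[0]     # forced-linear model
quad_coef = np.polyfit(stringency, deaths, 2)[0]     # model allowing curvature

print(f"linear slope:   {lin_slope:+.3f}")   # positive: "lockdowns kill"
print(f"quadratic term: {quad_coef:+.3f}")   # negative: deaths fall past the peak
```

Both fits see the same data; only the nonlinear one can express "partial restrictions bad, severe restrictions good," which is exactly the pattern Chisadza et al. describe in their discussion.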
4) The authors purposely normalize the data to reduce the apparent effect of government-led interventions. See page 29, where the authors explain how they analyzed the stringency data. The stringency score at the center of their analysis runs from 0 to 100, with 0 meaning the government shut down nothing and 100 meaning a complete shutdown of everything. Look carefully at what they did, as laid out on page 29. For example, they discuss the data from one study (Ashraf et al.). During that time period the US had a stringency score of 74. Except the authors do some more hand-waving, saying the US should have only had a stringency score of 44, because the US was not following "policy solely based on recommendations." WTF does that mean? They apparently have some magic recommendation of what the US should have been doing.
So everything in Table 3 of their write-up is not a comparison of lives saved under shutdowns versus no shutdowns. They are calculating how many lives were saved only by what countries enacted above some "recommendation" level they made up for each country.
To put that another way: imagine someone trying to find the tallest tree in the world. One way would be to measure how far a tree rises above the ground. That's objective. But what the authors are doing here is arguing that we shouldn't include the trunk in the measurement, and should only count how high the branches rise above the top of the trunk, because all trees have a trunk. Using that analysis, someone would conclude that every tree is shorter than what has been published in the past. It is absurd and non-objective, and it purposely distorts the data in their favor.
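The arithmetic of the trick is simple. Here is a back-of-envelope sketch using the paper's US numbers (74 actual vs. 44 assumed baseline); the per-point effect size is invented here purely to show the mechanics, not a real estimate:

```python
# Back-of-envelope sketch of the baseline-normalization trick.
# The 74 and 44 are the paper's US example; the effect size is hypothetical.
actual_stringency = 74.0   # what the US actually scored (OxCGRT scale)
assumed_baseline = 44.0    # the authors' "recommendations only" counterfactual
effect_per_point = 0.5     # hypothetical lives saved per stringency point

vs_no_lockdown = actual_stringency * effect_per_point                          # 37.0
vs_their_baseline = (actual_stringency - assumed_baseline) * effect_per_point  # 15.0

print(vs_no_lockdown, vs_their_baseline)
# The second number is what feeds Table 3: less than half the first,
# because 44 of the 74 stringency points are simply assumed away.
```

Whatever the true per-point effect is, crediting only the stringency above an invented baseline mechanically shrinks the estimated benefit.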
5) Weighting. Look again at Table 3. The big result of Table 3 is their "precision-weighted average." How did they calculate that? It's based on weighting each study by how confident they are in its estimate (weights of 1/SE). Look at which study they weight the highest. Wow, it's the Chisadza et al. study, whose conclusions the authors have already rejected, whose model they purposely swapped for the wrong one, and which they then use as the foundation of their analysis. If they really think the Chisadza paper is messed up, why even include it in their study??? Because it warps the data to support their own conclusions.
For some of the studies analyzed, it isn't even clear how the authors derived the standard errors. What was that term from 20 years ago? Fuzzy math, right?
So not only is their "precision-weighted average" wrong (see point #4), it is purposely based primarily on a study they themselves think is crap. Lastly, look at what the unweighted average and median show: they do show the possibility of a significant effect. Hmmm.
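To see how much one study can dominate a 1/SE-weighted average, here is a sketch (the 1/SE scheme is my reading of the paper; standard inverse-variance meta-analysis uses 1/SE². Every effect size and SE below is invented to show the mechanics, not taken from any real study):

```python
# Sketch: a precision-weighted average with 1/SE weights. One study with
# a tiny SE swamps the rest, so the "average" is mostly that one study.
# All effect sizes and SEs here are hypothetical.
studies = {
    # name: (effect estimate, standard error)
    "study_A": (-5.0, 4.0),
    "study_B": (-8.0, 5.0),
    "tiny_SE_study": (0.1, 0.01),  # stands in for a study the authors elsewhere call flawed
}

weights = {name: 1.0 / se for name, (eff, se) in studies.items()}
total = sum(weights.values())
weighted_avg = sum(weights[n] * studies[n][0] for n in studies) / total
unweighted_avg = sum(eff for eff, _ in studies.values()) / len(studies)

print(f"tiny_SE_study weight share: {weights['tiny_SE_study'] / total:.1%}")  # ~99.6%
print(f"precision-weighted avg: {weighted_avg:+.2f}")  # ~+0.07, hugs the dominant study
print(f"unweighted avg:         {unweighted_avg:+.2f}")  # -4.30
```

With weights like these, the headline "average" is effectively the tiny-SE study's estimate, while the unweighted average tells a very different story, which is exactly the pattern in their Table 3.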
6) This study is an analysis of how many lives could be saved under the conditions countries actually enacted. What this analysis doesn't AND cannot demonstrate is how many more lives would have been saved if countries had enacted MORE stringent policies. And given how they normalize the data, as described in my point #4, they purposely underestimate the effects of shutdowns.
7) Lastly, think about what the authors are trying to say. The argument for shutdowns has primarily been to avoid overloading the healthcare system. Yet the authors purposely reject reasonable measures of shutdown effectiveness, including the number of cases or hospitalizations. More cherry-picking.
I will happily await further analyses that will identify further problems that will expose how shitty this "analysis" is.