Follow-up: A few observations around COVID-19
My previous text on COVID-19, while broadly correct, might need some caveats in detail:
- Recently, I have repeatedly heard the claim that the daily increase in cases is mostly a question of an increase in the number of tested individuals, with the proportion of infected changing very little within this group. If so, this could make the numbers and trends I looked at extremely misleading, radically overstating the current spread and relative death-toll, e.g. in that the virus has mostly done its work already, while we are now only finding out about the results. (Note the parallel to the hypothesis that death rates and whatnots are mostly those of a yearly flu and that this year is unique in that someone is looking into the causes of death—not in that there would be a significant new driving cause.)
On the other hand, assuming the testing-drives-numbers hypothesis to be true, it is possible that the reason that more are tested is more people being infected, feeling symptoms, going to a physician to have the symptoms checked, and being tested because they went to a physician. Then the numbers could still reflect reality.
What-is-what will depend on the circumstances of testing, which are not within my knowledge and might vary from area to area or from time to time. Even so, this is yet another reason to keep a cool head, yet another reason why the situation is* or could** be less dangerous than the numbers might seem to imply.
*Examples include that the number of deaths must not be seen in isolation but be compared to other causes of death and that the deaths largely hit those already in poor health.
**Examples include potential over-reporting of deaths through mixing “killed by COVID-19” and “died while having COVID-19” in the statistics and unclear/inconsistent use of measures like “case fatality rate”.
- From the previous item, it is quite possible that the number of infected is far higher than the “less than 1-in-1000 Germans” that I used as a basis. If so, the relative risk that my own issues are COVID-19 (and not a cold/flu/whatnot) could sky-rocket. However, this does not change the overall reasoning: as the risk of having COVID-19 rose, the risk of non-trivial complications, given that I had COVID-19, would sink correspondingly (cf. the sketch below).
At an extreme, some have hypothesized that almost everyone would already be infected (most of them either asymptomatic or already cured).* If so, I would probably be among them, but the danger, based on e.g. proportion of infected who died, would drop to next to nothing and I would have no greater need to worry than before.
*This hypothesis I encountered for the first time much further back, but I kept it out of the discussion, because it is on the fringes of the spectrum, is more far-fetched, and, per this item, does not really matter in this particular context. The complication in the previous item still does not matter, but is less far-fetched and more mainstream. (More generally, there are a great many claims, arguments, speculations, etc. that I have left out of my own discussions.)
- Remark on notation: Below I will use the “^” sign to denote exponentiation (e.g. 2^3 = 2 * 2 * 2). Beware that instances of “*” (for multiplication) might or might not look odd, for technical reasons: I normally use this sign only for footnotes with my current markup, and I have no provisions in place to differentiate between footnote use and multiplication use typographically. Proof-reading, I also note that “t” (“tee”, used for time below) and “f” (“eff”, used for functions below) look extremely similar in my own browser, possibly because of an unfortunate default font.
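To illustrate the claimed cancellation (risk of having COVID-19 up, risk of complications given COVID-19 down), consider the following Python sketch. This is a crude illustration of my own, with invented numbers: the overall risk is held fixed, because it ultimately derives from the observed deaths and complications, which do not change with our estimate of the number of infected.

    # Invented numbers: the observed overall risk is held fixed, while
    # the assumed proportion of infected varies.
    overall_risk = 0.0001  # hypothetical: chance that a random person has
                           # COVID-19 AND develops non-trivial complications

    for proportion_infected in (0.001, 0.01, 0.1, 0.9):
        # A higher proportion of infected raises the risk that my own
        # issues are COVID-19 ...
        p_have_it = proportion_infected
        # ... but, with the overall risk fixed, the risk of complications,
        # given an infection, must sink by the same factor.
        p_complications = overall_risk / proportion_infected
        print(proportion_infected, p_complications, p_have_it * p_complications)
    # The product (the risk that actually matters to me) stays constant.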
While I maintain that exponential models become naive very fast, there is a complication of quasi-exponentiality that I overlooked in my last discussion. This also affects the relevance of the greater-than-linear-but-smaller-than-exponential growth mentioned in the German data. (Assuming that the data is usable in the first place. Cf. the first item above.)
Exponential growth amounts to the increase at time t being proportional to the value at time t.* A typical case of this is a population where each member of the population contributes identically** to the growth, e.g. when each infected infects the same number of new people in each “iteration”.
*More formally, e.g. by a differential equation like df/dt = k * f or, in a discrete analog or approximation, f(n + 1) = f(n) + k * f(n).
**This is a common, simplifying, assumption when making models, in the hope that the variations “average out”. I suspect that it is naive more often than not.
For instance, assume that this number is two and that we start with a population of one at t = 0. At t = 1, we have the original plus the two people infected by him for a total of three. At t = 2, we have the three plus the 2 * 3 = 6 people they infected, or 9 overall, etc. We then have f(t) = 3^t. (Where t is time in some context-dependent unit, e.g. hours, days, or years. Below, I will silently assume days, but note that the numbers used are not necessarily realistic. For instance, a tripling of the infected every day would be horrifyingly large.)
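To make this concrete, a minimal Python sketch of this naive model, using the same invented numbers. (Note that Python writes exponentiation as “**” rather than “^”.)

    # Naive model: at each step, EVERY currently infected person
    # infects two new people, so the total triples per step.
    def f_naive(t, rate=2, start=1):
        infected = start
        for _ in range(t):
            infected += rate * infected  # everyone infects `rate` others
        return infected

    # Matches the closed form 3^t:
    for t in range(5):
        assert f_naive(t) == 3 ** t
        print(t, f_naive(t))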
A slightly less naive* model might assume that the increase from t to t + 1 is affected only by the new cases from time t – 1 to t. Running through the same scenario, we still have f(0) = 1 and f(1) = 3, as the sum of the first person and the two he infected. However, at t = 2, we have the previous three and the 2 * 2 = 4 people infected by those infected in the previous iteration, for a total of 7 (not 9). At t = 3, we have the previous 7 plus the 2 * 4 = 8 newly infected, for a total of 15, etc. This amounts to the series 1 + 2 + 4 + … + 2^t. By a high-school formula, this sums to (2^(t + 1) – 1) / (2 – 1) = 2^(t + 1) – 1. This is technically not an exponential function; however, 2^(t + 1) is, and the difference of – 1 rarely matters for a large t.**
*I am not familiar with the models actually used, but a more sophisticated model might work with e.g. variable probabilities for different generations of the infection, where someone in the latest generation has a greater probability of infecting others than those in the second latest, the second latest one greater than the third latest, etc. Different probabilities might apply to e.g. office workers, schoolchildren, and housewives. (This classification would have to remain crude, or the complexity would explode with little benefit.) Different probabilities could apply in different areas depending on how long the infection has been present locally. Some mechanism should be in place to consider recoveries and deaths. Etc.
**For instance, t = 10 gives 2048 in one case and 2047 in the other. But, as a comparison, our original 3^t results in the much larger 59049.
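The slightly less naive model in a corresponding sketch, checking both the closed form 2^(t + 1) – 1 and the numbers from the footnote:

    # Less naive model: only those infected in the PREVIOUS step infect
    # others; older cases no longer contribute.
    def f_less_naive(t, rate=2, start=1):
        total, newly = start, start
        for _ in range(t):
            newly = rate * newly  # only the newest cases infect anyone
            total += newly
        return total

    # Matches 2^(t + 1) - 1; at t = 10, this gives 2047 where the
    # original, naive model gives 3^10 = 59049:
    for t in (0, 1, 2, 3, 10):
        assert f_less_naive(t) == 2 ** (t + 1) - 1
        print(t, f_less_naive(t), 3 ** t)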
More generally, there are many “sub-exponential” functions that are still bounded from below by an exponential function and/or might be near-indistinguishable from one for large input values. These would then show a less-than-exponential growth in each iteration of e.g. a model of infections, but would be as bad as an exponential function in the long run. (If often a smaller exponential function than if the growth had not been sub-exponential.) One example is a combination of a larger pre-infected and non-infectious sub-population with an (originally) smaller and highly infectious sub-population, e.g. for f(t) = 10000 + 2^t. For small t, the effects might seem like trivial measurement noise or an extremely slow-moving infection, e.g. in that f(5) – f(4) = 16, a small fraction of the overall; however, a little later, we have e.g. f(20) – f(19) = 524288, which almost doubles the overall at t = 19 and dwarfs the overall at t = 0 (and e.g. t = 5).
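The same effect in a few lines of Python, with the numbers from the example:

    # A large pre-infected, non-infectious group plus a small but highly
    # infectious group: per-step growth looks harmless early on, but is
    # as bad as an exponential in the long run.
    def f(t):
        return 10000 + 2 ** t

    print(f(5) - f(4))    # 16: could pass for measurement noise
    print(f(20) - f(19))  # 524288: almost doubles the total at t = 19
    print(f(20) // f(0))  # 105: the t = 0 total is dwarfed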
Then again, an exponential function is not automatically a problem, if the growth rate is sufficiently small relative to the length of time. For instance, if we know that a cure will be widely available within three months, then 1.1^t is much better than (the merely polynomial) 1 + t^2, no matter how much worse it would be after four months.
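Again as a sketch, with a day as unit and three respectively four months approximated as 90 respectively 120 days:

    # Exponential vs. polynomial growth over a fixed horizon: if a cure
    # arrives at day 90, the exponential 1.1^t is the lesser evil, no
    # matter how much worse it grows by day 120.
    for t in (90, 120):
        print(t, round(1.1 ** t), 1 + t ** 2)
    # day 90:  1.1^t is roughly 5313,  1 + t^2 = 8101  (exponential smaller)
    # day 120: 1.1^t is roughly 92709, 1 + t^2 = 14401 (exponential far larger)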
On a more personal note: By the time of my last text, I had managed to get hold of toilet paper (cf. an earlier text), but I might have failed again, had I arrived ten minutes later—so fast was the product moving. (And not necessarily even through hoarding at this juncture: everyone seemed to be taking one package, implying that it was likely mostly just others who were in a deficit.) However, the scarcity seems to continue and quite a few other items are (still) affected, including canned foods and frozen meals. Considering that the supply chain has had weeks to react, this is a disturbing sign for the future and any non-artificial* crisis.**
*While the supply side might have been hit by various restrictions on work and whatnot, the brunt of the problem likely still stems from the unwarranted increase in demand. (And even the supply effect could be seen as artificial …) Now consider a similar situation when the supply side has been severely hit, e.g. in a war.
**Whether any blame, e.g. based on poor planning, should be attached to someone, I leave unstated—there are too many factors that I am unaware of, e.g. how large the storage buffers tend to be and to what degree e.g. farm output can be a limiter. I do note the failure to raise prices as per my earlier text, however.