Warranted skepticism in science / Follow-up: Who are the science deniers?
Disclaimer: I wrote a near-complete draft of the below two weeks ago, but left a few TODOs in it, where I am today uncertain exactly what I intended. For reasons of time and the size of my backlog, I have mostly removed them without additional work. For the same reasons, I have not split the text in two, with one part dealing with the intended core topic and one with the unplanned side-topics around 7DSP (cf. below).
I have already written ([1]) about how accusations of e.g. “science denialism” are more appropriately directed at the Left than at the non-Left. However, there is another aspect to the issue, where the non-Left, again, fares better, but which might be a source of much of the Leftist propaganda about alleged non-Leftist “science denialism”: warranted skepticism. This usually goes hand-in-hand with critical thinking and a wish to think for oneself/to form one’s own opinions. (All of which seem to be far rarer on the Left than on the non-Left.)
The simple truth is that even proper science performed by great minds according to all the rules of the scientific method often gets things wrong or finds something other than what was expected. Even physics is inherently fallible and incremental, and does not reveal absolute and unshakable truths after a five-minute experiment (or, for that matter, five minutes of math). That we now might appear to have a great many absolute and unshakable truths in physics is the result of hundreds of years of accumulated results and the vetting of those results. Even so, there are occasional discoveries, or even paradigm shifts, that show something to be faulty or merely approximately correct.
But how much of science can be described as “proper science performed by great minds according to all the rules of the scientific method”? Very little. Most scientists are not great minds—good, maybe, but not great. Many, especially in the softer sciences, do not have a scientific mindset. Many are driven by secondary concerns, e.g. to publish enough for a good career, to further an ideological agenda, or to stay on the good side of a financier. The scientific method* might be something developed and proposed more by philosophers than by scientists, and most good scientists are likely content with a scientific mindset—while bad scientists often lack even that.
*Whether the scientific method is even that important on the level of the individual scientist might be disputed. It does become very important on the level of the field as a whole, however.
It is also quite common for scientists and/or scientific results to contradict* each other, especially as time goes by and especially where health is concerned. To take just one example, I have heard at least the following claims about alcohol** and health: (a) no matter how much or what type of alcohol you drink, drinking less is better; (b) moderate amounts of any type of alcohol are beneficial; (c) moderate amounts of specifically wine are beneficial, but not of other types of alcohol; (d) moderate amounts of specifically red wine are beneficial, but not of other types of alcohol, including other wines.
*Not to be confused with pseudo-contradictions, e.g. the unexpected-but-not-contradictory claims that (a) coffee is good, (b) caffeine is bad. It might, for instance, be that some non-caffeine component of coffee is good for the human body and more than outweighs any negative effects of caffeine. (Whether either claim is true, I leave unstated, especially with an eye on issues like dosages and the risk of someone mistaking correlation for causation.)
**Strictly speaking, ethanol—that we should stay away from e.g. methanol is indisputable. However, the reporting (and e.g. the labels on wine and gin bottles) is always phrased in terms of “alcohol” and I will remain consistent with this.
Being skeptical of what is claimed by even physicists has some justification—being skeptical of what is claimed in the softer sciences is an outright virtue. This, however, is not “science denialism”—it is, on the contrary, a part of that scientific mindset. Those who blindly claim that “scientist X said Y; ergo, Y”, without thinking for themselves, without an awareness that scientist X might be wrong (or misunderstood/misreported), without finding out what other scientists have said on the topic, etc., are the ones being unscientific. Moreover, being skeptical of what journalists and politicians claim that scientists claim is a virtual necessity for an even remotely scientific mind. Someone who is not should not be allowed to vote.
I have already (e.g. in [1]) pointed to severe issues in the social sciences/“sciences” and in ideological pseudo-sciences like “gender studies”, around COVID, and (at least as far as journalists and politicians are concerned) in environmental science. To this, add an interesting recent read: “The Seven Deadly Sins of Psychology” (7DSP) by Chris Chambers. This book gives a damning analysis of many issues in psychology, including various forms of publication bias* and a failure to perform adequate replication studies (as well as examples of an absurd attitude towards replication studies among some researchers). It is virtually impossible to take modern psychology seriously in light of 7DSP.**
*Including secondary problems like “p-hacking”. All in all, this publication bias goes a long way towards explaining the infamous “replication crisis”.
**Assuming that it is factually correct. The scientifically minded are, of course, open to the possibility that it gives a misleading picture. However, the contents match what I have heard and seen, if in smaller doses, from other sources.
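To make the p-hacking mechanism concrete, consider the following toy simulation (my own illustration, not from 7DSP; all numbers are invented): a researcher who measures twenty unrelated quantities on pure noise and reports whichever happens to come out “significant” will find a spurious result far more often than the nominal 5 percent of the time.

```python
import math
import random

random.seed(1)

def p_value(sample):
    """Two-sided one-sample z-test against mean 0 (normal approximation,
    good enough for this sketch)."""
    n = len(sample)
    mean = sum(sample) / n
    sd = (sum((x - mean) ** 2 for x in sample) / (n - 1)) ** 0.5
    z = mean / (sd / n ** 0.5)
    return math.erfc(abs(z) / math.sqrt(2))

TRIALS, MEASURES, N, ALPHA = 2000, 20, 30, 0.05

honest_hits = 0   # test only one pre-registered measure
hacked_hits = 0   # test twenty measures, report the "best" one
for _ in range(TRIALS):
    # Pure noise: every measure has a true effect of exactly zero.
    samples = [[random.gauss(0, 1) for _ in range(N)] for _ in range(MEASURES)]
    if p_value(samples[0]) < ALPHA:
        honest_hits += 1
    if min(p_value(s) for s in samples) < ALPHA:
        hacked_hits += 1

print(f"false-positive rate, one measure:    {honest_hits / TRIALS:.2f}")  # ≈ 0.05
print(f"false-positive rate, best of twenty: {hacked_hits / TRIALS:.2f}")  # ≈ 0.64
```

The point generalizes: any undisclosed flexibility (dropping outliers, stopping data collection early, trying several statistical tests) has a similar inflating effect on false positives.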
Excursion on retractions:
(Executive summary: Bad science is legitimate grounds for retraction, merely being wrong is not.)
7DSP brings up the topic of retractions, in particular a reluctance to retract papers within the psychology community. Here the author, in my opinion, severely overshoots the target by suggesting that already published papers should be retracted merely because replication attempts fail. This brings me to a topic that has long annoyed me—the flawed idea that (experimental*) papers should be retracted willy-nilly. Now, if the authors of a paper find out, post-publication, that they have made such a severe mistake that the paper is invalidated, then a retraction is warranted.** Ditto, if the authors know before publication and still go ahead (in which case we usually enter the area of academic fraud).
*When it comes to e.g. math papers the situation might often be different. If a proof contains a previously undetected error of logic, e.g., then the entire paper might collapse or only be worthwhile in an amended form. If there are no such errors of logic, arithmetic, algebra, whatnot, on the other hand, the results of the paper will almost certainly be true. (While an experimental paper can do everything right and still be wrong.)
**Say, in a medical double-blind study, that an inadvertent unblinding took place or that data for the test and control groups were switched.
However, the idea that a properly written paper about a properly performed experiment/study/whatnot should be retracted merely because it later proves not to describe reality* is ludicrous. A good paper in the experimental sciences does not** proclaim what the truth of reality is—it states that “we did this and we did that, and the result was the following”. This will continue to hold true, even if there are a hundred failed replications—and there is no point in a retraction. Indeed, retracting can have negative consequences down the line, e.g. in that a later meta-study chooses not to include the retracted-for-a-spurious-reason paper, which weakens the meta-study.
*Which can happen e.g. for statistical reasons, say, in that a survey was given to a random sample that happened to have an unfortunate composition, which skewed the answers relative to a population-wide survey.
**If the paper fails here, there are bigger things to worry about than the details of what went wrong.
Worse yet are retractions for entirely spurious reasons, say that a particular paper causes politically or ideologically motivated protests or (when the retractor is a journal) that one of the authors later engages in other research that causes politically or ideologically motivated protests.
Now, if a researcher says “My new pill can cure cancer!”* and this turns out to be incorrect, this claim should be retracted. Generally, explicit claims (going beyond data), explicit advice, endorsements,** and the like can and should be retracted when later experiences prove them wrong—but not papers.
*As opposed to “I performed a trial as described in my paper X and found the results presented there”. (That “My new pill can cure cancer!” has no place in the paper itself, even should the results point in that direction, is a given.)
**Note that the publication of a paper in a journal does not constitute an endorsement by the journal in a sense involving the correctness or infallibility of the paper. The true implied claim is typically some combination of “we believe that you will want to read this” and “we have performed the usual vetting, peer-review, etc.”, which is not an extent of endorsement that is sensible to retract just because of e.g. a later replication problem. In fact, off the top of my head, I would argue that there are only three scenarios in which a journal can legitimately retract, namely when (a) the authors have already retracted the paper for a valid reason, (b) there is significant proof that the authors should have retracted but refused to do so, (c) the paper has been revealed to contain deliberate fraud. (The two latter often overlap.)
Excursion on 7DSP:
The book is generally worth reading; however, I caution that the author does not strike me as a great thinker and that the reader often needs to reevaluate the text with regard to what is left out, what other perspectives might apply, whether the reasoning holds, etc. For instance, at some point he poses a few ethical questions of the “is this OK or not OK” type, including e.g. removal of data points—but he fails to specify why the data points were removed, which is a central issue when judging their removal. (Did the data points go against the preferred hypothesis while otherwise being valid, or did they also have some notable problem that made them misleading in the evaluation of said hypothesis?)