Michael Eriksson's Blog

A Swede in Germany

Posts Tagged ‘fallacies’

Explanations and observations


As I said in a footnote to my previous text, “note that I, here and elsewhere, am open to other explanations that cover a similar set of observations”. In comparison with many others, this is a critical point: One of the most common reasons for incorrect opinions (especially in, but by no means limited to, politics) is the failure to understand the difference between a hypothesis* that explains the observations, is consistent with the observations, or similar, and a hypothesis that is actually correct, combined with a tendency to jump to conclusions of “X is a matching hypothesis; ergo, X is the matching hypothesis” or “[…]; ergo, X is the truth”. This is all the more so, cf. below, when a mere “almost explains”, “is almost consistent”, etc., is allowed.

*The choice of “hypothesis” over “theory” (and e.g. “model”) is a little arbitrary, but I prefer it, as my main focus is on the usually unsystematic and poorly developed ideas of non-specialists (in the field at hand). However, similar errors are quite common even among (real or self-proclaimed) specialists, as with e.g. gender-studies.

The correct hypothesis must* be consistent with the observations, but it need not be the only consistent one and mere consistency does not guarantee that the current consistent candidate is the correct one. (Indeed, even a, in some sense or to some approximation, correct candidate need not be the last word. In the hard sciences, continual refinement over time is the standard expectation.)

*To some approximation. It might e.g. be that the observations contain errors or statistical fluctuations. (This is a reason why good science is not content with matching prior observations, but also makes new predictions and then sees whether the predictions match reality. Generally, a critical difference between science, on the one hand, and proto-science from before the “scientific method”, current pseudo-science, and similar not- or not-quite-science groupings, on the other, is the openness to put assumptions and theoretical deductions to a practical test—and to adapt the theory when a test is failed.)

To take one of my go-to examples, that a woman was fired (or, m.m., not hired, not promoted, whatnot) might be because “My boss discriminated against me because I’m a woman!!!”, but it might also be because she was incompetent—or a number of other explanations. On balance, the likelihood that this self-serving Feminist explanation is correct is comparatively small.

Generally, many errors go back to assuming that a certain event was based on membership in a population group, e.g. the group of women or the group of Blacks, without properly considering the influence of own behavior and other factors on other dimensions.* This applies even when we look at aggregates,** as different groups often show different behaviors, abilities, and other characteristics—and this regardless of the cause of these differences. For instance, before drawing any conclusion from an over-/under-representation*** of some group in some category, beyond “is over-/under-represented”, we must consider the possibility of more immediate causes than “X belongs to group G” (with the common rider “ergo, …”), e.g. that “X displayed behavior B, and while B might be more common in group G than elsewhere, the immediate cause was B—not G”.

*It might be argued that e.g. the group of incompetents is a population group, but, if so, it is one less obvious and one less likely to find self-professed members, and I will ignore this possibility in the following.

**While the above woman was an individual example that need not tell us anything about the big picture, even had she been correct, and where she might also have been incorrect for statistical reasons.

***Relative to some naive standard, e.g. proportion of the overall population. More generally, over-/under-representation might disappear if another standard is used, e.g. in that the proportion of convicts might seem unduly high for one group when we look at its proportion of the overall population but not when we look at its proportion of criminals. What standard is the most reasonable will usually depend on the purpose of the comparison. To boot, even a naive standard might be open to alternatives, e.g. through variations in age demographics and local demographics.

A notable family of examples stems from the confusion of correlation with causality. If X tends to go hand in hand with Y, it is not correct to assume that X causes Y. Yes, if X causes Y this might explain the observations at hand, but it might be that Y causes X, that each has a causal effect on the other,* that both are independently caused by Z, etc.—and these scenarios, too, might explain the observations at hand. We might even have cases where there is a causality from X to Y but this causality is drowned out in the big picture: Consider the observations that (a) those buried alive tend to die and that (b) the cemeteries are filled with dead humans who are buried. These observations are consistent with the (posthumorous) hypothesis that there has been a large-scale burying of living humans; it might certainly be that some very few of these were buried alive; and it might even be that, under some exceptional circumstances, e.g. in a genocidal Communist dictatorship, a great number of live burials took place. However, chances are that all or almost all those buried at the nearest cemetery died first and were buried because they had died—not first buried and then dying because they had been buried.

*Height and weight are a good example: Those taller tend to be naturally heavier (“vertical” heaviness), but someone malnourished might see his growth stunted and be shorter than he would have been, had he been better nourished, in which case “horizontal” heaviness (or lack thereof) affects height.
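To make the correlation-vs-causation point above concrete, here is a small Python sketch with invented numbers: X and Y correlate strongly, yet neither causes the other—both are driven by a common cause Z.

```python
import random

random.seed(1)

# Z is the (hypothetical) common cause; X and Y are Z plus independent noise.
zs = [random.gauss(0, 1) for _ in range(10_000)]
xs = [z + random.gauss(0, 0.5) for z in zs]
ys = [z + random.gauss(0, 0.5) for z in zs]

def corr(u, v):
    # Pearson correlation, computed from scratch to keep the sketch self-contained.
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v)) / n
    su = (sum((a - mu) ** 2 for a in u) / n) ** 0.5
    sv = (sum((b - mv) ** 2 for b in v) / n) ** 0.5
    return cov / (su * sv)

print(round(corr(xs, ys), 2))  # roughly 0.8, although X never influences Y
```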

Another notable family stems from misinterpreting claims like “observation X is consistent with hypothesis Y” (e.g. that certain observations from an autopsy are consistent with the hypothesis that the cause of death was strangulation). This does not imply that Y is true, merely that Y cannot, at this time, be ruled out. In contrast, the claim that “X is not consistent with Y” would rule Y out. (With reservations for mistakes, confounding factors, and whatnot.) More generally, demonstrations of consistency can make a claim more plausible, but cannot fundamentally prove the claim, which is where the idea of “falsifiability” comes in.* The typical reference to Sherlock Holmes and the dog that did not bark is a good illustration—the fact that the dog did not bark was inconsistent with some explanations but not with others.**

*As a counterpoint, some cases of apparent-to-the-layman falsifications are not falsifications. For example, if a study claims that it “failed to show X”, it does not follow that it “showed not-X”. It might even be that the study shows reasonably strong support for X—just support that fell short of the typical, semi-arbitrary, bars used to measure when a finding is considered statistically significant, say, that a “p-value” of 0.05 or less has been found (a p-value of 0.10, e.g., fails to clear this bar, but is still a strong indication).
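As a minimal numerical sketch of this footnote (the effect size and standard error are invented for illustration): a result roughly 1.6 standard errors away from zero gives a p-value near 0.10—it misses the conventional 0.05 bar, yet is far from demonstrating the absence of an effect.

```python
from math import erfc, sqrt

def two_sided_p(effect, standard_error):
    # Two-sided p-value under a normal approximation (z-test).
    z = abs(effect / standard_error)
    return erfc(z / sqrt(2))

print(round(two_sided_p(1.645, 1.0), 3))  # ~0.10: not "significant" at 0.05, but no refutation either
print(round(two_sided_p(1.960, 1.0), 3))  # ~0.05: the conventional bar
```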

**Notably, depending on whether the perpetrator was presumed to be someone strange or someone familiar to the dog. Here we see another complication, namely that the one might fail to see an inconsistency that the other does see, notably in terms of consequences. Indeed, one of my own main complaints about e.g. politicians is that they seem to miss a great many of the likely consequences of their suggestions. (This has also been a common annoyance in the office, in that someone makes a suggestion that I suspect will do more harm than good, that the majority fails to listen to my warnings, and that no-one remembers my warnings, when I am, in due time, proven right.)

More generally, Sherlock Holmes differed from the competition through a lesser willingness to accept an explanation that explained almost all observations resp. was consistent with almost all restrictions involved.* The question, then, is what does someone do when some previously unknown observations, facts, arguments, whatnot contradict a pet hypothesis? (Or, worse, when someone else points to contradictory evidence that has been swept under the carpet?) The rational attitude is to take a hard look, with an open mind, at the hypothesis and these observations (etc.) with an eye at determining where the problem is. It might now be e.g. that some contradictions were spurious, but it might also be that the hypothesis needs to be refined or rejected. Too many, however, and especially in Leftist camps, just bite down on the hypothesis, ignore/distort/defame the contradictions, or, even, try to turn the contradictions into proof of the hypothesis… The latter in at least two variations, which can be roughly described as willful ignorance/obstructionism and/or bending the facts to fit the hypothesis** resp. a “damned, if you do; damned, if you don’t” double-bind***.

*Similarly, as a software developer, I have learned to look hard at even seemingly small issues of mis- or unexpected behavior from the software in development. That virtual blip on the radar screen might be something small, but it is hardly ever nothing and sometimes it, at the risk of mixing metaphors, is the tip of an iceberg capable of sinking the Titanic. Any known deviation from the intended behavior is bad.

**E.g. “Even small kids show signs of male and female behaviors?!? Gender stereotyping must be even more powerful and begin even earlier than we thought! We must redouble our efforts!”, instead of a sane “[…]?!? Hmm, maybe there might be something biological to this, after all.”

***E.g. “Either you agree with us that male privilege [White privilege, White Supremacy, whatnot] is a massive problem or your denial proves us right by exemplifying male privilege!!!”.

Excursion on competing “good” hypotheses:
It is often the case that there is more than one hypothesis that explains what should be explained, is consistent with what is known, etc. This includes cases where the one hypothesis* is a suggested refinement of the other (as with e.g. Newton’s laws of motion being superseded by Einstein’s) and where the one uses a significantly different model from the other (as with e.g. the switch from geocentrism to heliocentrism).** Depending on the circumstances at hand, the choice might be arbitrary, require additional data, be based on factors like ease of use, and/or be a candidate for the heuristic of Occam’s Razor. A key point, however, is that the success of actions suggested based on such a hypothesis can be highly dependent on how correct the hypothesis is, not just on how well it matches observations, and that corresponding efforts must be spent on making sure of its correctness before large-scale or irreversible action is taken. For instance, the hypothesis that poor educational outcomes in a certain group are caused by “being underprivileged” is not absurd a priori, but the attempts to improve school results by remedies focused on “privilege” have never worked well (or never well beyond a certain low limit). As an untested model, this explanation might seem tempting, but the proof of the pudding is in the eating. Still, the main explanations used by the Left, today, decades after the first experiments and after decades of wasted money and other resources, are still in the extended “underprivileged” family—and the suggested solutions broadly amount to more of the same and to throwing good money after bad.

*Here, more often, scientific theory/model.

**While the approaches of Einstein and Newton were different, and while e.g. their equations of motion look different, their results are virtually identical for sufficiently “classical” conditions and Newton’s equations can be derived as limit cases from Einstein’s. In contrast, geocentrism and heliocentrism are fundamentally different and contradictory, with the former requiring convoluted corrections to predict reality. (Note that a seeming equivalency in e.g. explanatory power might depend on the time of comparison. For instance, a geocentric or Newtonian model suggested today would fall well short of explaining known observations.)
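As a small numerical sketch (in Python) of the limit-case claim in the preceding footnote, restricted to kinetic energy and unit mass: the relativistic and Newtonian values converge as the speed becomes small relative to c.

```python
from math import sqrt

C = 299_792_458.0  # speed of light, m/s

def newton_ke(m, v):
    return 0.5 * m * v ** 2

def einstein_ke(m, v):
    gamma = 1.0 / sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C ** 2

for v in (0.5 * C, 0.1 * C, 0.001 * C):
    ratio = einstein_ke(1.0, v) / newton_ke(1.0, v)
    print(f"v = {v / C:.3f} c, ratio = {ratio:.6f}")
# The ratio falls from ~1.24 at half the speed of light to ~1.000001 at 0.001 c.
```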

Excursion on “What’s the next number?”:
A similar issue is found with a particularly idiotic family of pseudo-math problems, namely where some few numbers are given and the problem solver is asked to find the next number. The issue here is that there are always multiple solutions and that “Which solution is the best?” is a matter of taste. In some cases, this “best” solution might seem obvious, as with 1, 2, 3, ? and the answer 4. However, even here, other solutions exist, e.g. 5 (assuming that 1 and 2 are the initial values of a Fibonacci-style sequence) and 0 (the result of putting n = 4 in the function f(n) = (4 – n) (1/6 (2 – n) (3 – n) – (1 – n) (3 – n) + 3/2 (1 – n) (2 – n)), deliberately constructed to give the results 1, 2, 3, 0 on the inputs 1, 2, 3, 4).* In other cases, the ambiguity is intolerably high and/or the solution hinges on knowing some obscure fact. For instance, should 0, 7, 0, 7, ? give 0 (assuming that 0 and 7 alternate for the duration) or 1 (assuming that these are digits from 2^(-1/2) = 0.7071…**) or possibly something different yet?

*This particular solution is better reached through simply taking n modulo 4, which leads to an ever repeating 1, 2, 3, 0, 1, 2, 3, 0, … However, my original intent was to demonstrate how a similar type of polynomial construction can always be made to match a series, which indirectly led me to the above. See excursion.

**A number that will be easily recognizable to most mathematicians and many with some mathematical interest, but hardly to the “average Joe”.
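For the doubtful, a small Python sketch that evaluates the polynomial given above and the Fibonacci-style reading of 1, 2, 3, ?:

```python
def f(n):
    # The polynomial from above, constructed to give 1, 2, 3, 0 for n = 1, 2, 3, 4.
    return (4 - n) * ((1 / 6) * (2 - n) * (3 - n)
                      - (1 - n) * (3 - n)
                      + (3 / 2) * (1 - n) * (2 - n))

print([f(n) for n in (1, 2, 3, 4)])  # 1, 2, 3, 0 (as floats)

# Reading 1 and 2 as the start of a Fibonacci-style sequence gives 5 instead of 4:
a, b = 1, 2
print([a, b, a + b, b + (a + b)])  # [1, 2, 3, 5]
```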

Excursion on constructing polynomials to match any series:
For a sequence like a, b, c, …, simply take a series of terms in n where all but one of the terms equal zero for each position in the sequence and adjust the non-zero term to match the correct value. For instance, to create a sequence where n = 1 -> a, n = 2 -> b, n = 3 -> c, we can take a first “raw” term of (2 – n) (3 – n), which is 0 for n = 2 and n = 3. It is also 2 for n = 1 and a first “cooked” term of a/2 (2 – n) (3 – n) now gives the correct value for n = 1. Proceed in the same manner for the second and third term to get b and c for n = 2 resp. n = 3, with the idea that each term is 0 for all but one n and gives the right value for that n. The overall result is a/2 (2 – n) (3 – n) – b (1 – n) (3 – n) + c/2 (1 – n) (2 – n). The same idea can be applied to e.g. a, b, c, d, e, ? and a, b, ?, c, d.
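A small Python sketch of this construction (which is, in effect, Lagrange interpolation; the function name is mine):

```python
def matching_polynomial(values):
    """Return p with p(i) = values[i - 1] for i = 1, 2, ..., built as described:
    one term per position, zero at every other position, scaled to fit."""
    points = list(range(1, len(values) + 1))

    def p(n):
        total = 0.0
        for i, a in zip(points, values):
            term = float(a)
            for j in points:
                if j != i:
                    term *= (j - n) / (j - i)  # 0 for n = j, 1 for n = i
            total += term
        return total

    return p

p = matching_polynomial([1, 2, 3])
print([p(n) for n in (1, 2, 3, 4)])  # [1.0, 2.0, 3.0, 4.0]

q = matching_polynomial([0, 7, 0, 7])
print(round(q(5)))  # 56 -- yet another defensible continuation of 0, 7, 0, 7, ?
```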

This is what I originally did with 1, 2, 3 in the previous excursion, assuming that I would get something other than 4 for n = 4. However, the resulting polynomial annoyingly* reduced to n, and n = 4 then, obviously, still resulted in 4, which caused me to go with the workaround of forcing a 0 for n = 4. (Note how the shape of the polynomial is a factor that is 0 for n = 4, times an expression that is a modified** version of a polynomial as constructed above, with a/b/c = 1/2/3.)

*But, with hindsight, predictably: it can be shown that any two nth-degree polynomials that are identical at n + 1 different points must be identical throughout, which implies that any 2nd-degree polynomial with p(1) = 1, p(2) = 2, and p(3) = 3 must be identical to n (i.e. to 0n^2 + n + 0).

**The same procedure of creation can be used, but it must now be corrected for the (4 – n) term, giving us a first “cooked” term of a/6 (2 – n) (3 – n) or a third of the original, as 4 – 1 = 3. The other terms are similarly scaled by 1/2 = 1 / (4 – 2) and (with no effect) 1 = 1 / (4 – 3).

Excursion on other issues:
There are of course a great many other (but off topic) reasons why someone could be wrong. A particularly interesting one is mistaking someone else’s inter- or extrapolation for truth, taking an “artist’s rendering” of something to be more realistic than it is, taking historical fiction to be more historically correct than it is, or similar. For instance, I am currently reading/skimming “The World Encyclopedia of Dinosaurs & Prehistoric Creatures”, which contains a great many illustrations, with a fair amount of detail, and fanciful colors—but these illustrations are often accompanied by claims like “is known only from the left half of the lower jaw” (specific quote from the entry for Sarcolestes, with reservations for errors in transcription). Anyone who trusts the accompanying illustration would risk being laughably wrong, and even the less fanciful conjectures mentioned, e.g. that Sarcolestes was an ankylosaurid, might turn out to be incorrect. Indeed, its name, “flesh thief”, goes back to earlier speculation that it was a meat eater, while the current belief points to a “slow-moving plant-eating animal”.


Written by michaeleriksson

December 18, 2022 at 11:53 pm

Fallacy fallacies: Introduction and the “small savings count” fallacy fallacy


Earlier today, I wrote:

A secondary motivation is that being somewhat price conscious might bring a considerable gain over the sum of all products, even when it does not do so over a single product. Shave, say, 10 percent of the yearly grocery bill and we are talking something noticeable.

This brings me to a text series that has long lingered in my backlog (and will likely only be written by-and-by): That something is invoked as a fallacy in a fallacious manner, for which I suggest the term “fallacy fallacy”.*

*Off the top of my head, I have at least two further entries, namely the “slippery slope” fallacy fallacy and the “etymological” fallacy fallacy. There might or might not be more in my backlog and/or more that will appear over time. The fallacy fallacy discussed below, respectively the claimed underlying fallacy, has no name that I am aware of, and I go with the “small savings count” fallacy and fallacy fallacy for the sake of having some label.

Specifically, I recall taking some sort of psychology class during my long-ago business studies, when the professor almost triumphantly began to expound on how it was not important to pay attention to the price of milk, because even 20 percent of 1 Euro is only 20 cent.* It was a fallacy to think otherwise and a fallacy to waste time on milk prices.

*Translated into approximate current prices and Euro, and with reservations for exact details (the original discussion might have been in Swedish crowns in 1995). The principle should be clear. Note that Swedish and German milk is usually sold in 1-liter cartons.

Now, as with the above single product, this might well hold true if we look at that single product and/or at a single purchase, if cheap enough—and if he had instead spoken of a principle of optimization, that we should optimize where it counts,* then I would even have agreed. Ditto, if he had pointed to e.g. the need to consider opportunity costs (and not just the savings).** Ditto, if he had focused on a one-off purchase.*** He either did not or only did so very cursorily with the sole point of pushing his thesis that “small savings count” is a fallacy.

*If someone wants to reduce costs, he should first look at where his main spendings are and what might be cut there. If someone wants to make a program run faster, he should first look at where the program spends most time and what might be cut there. Etc.

**Ten minutes extra in a car might bring greater costs than the milk savings through each of gasoline costs, wear and tear on the car, and own time lost. Then we have factors like the increased accident risk and air pollution.

***As will be clear from the below, even a larger one-off purchase might be a smaller source of savings than a smaller again-and-again purchase.

Cut 20 percent of 1 Euro once, and, yes, that is only 20 cent—which is not normally* worth the trouble of, say, going to another store for a better price on milk. However, even for just milk, the calculation might be different over a longer time. Say that someone averages a liter every two days over a life of 80 post-weaning years.** Ignoring leap years and assuming inflation-corrected prices, we then have a total of 80 x 365/2 x 1 = 14600 (Euro). A savings of 20 percent now amounts to 2920 Euro, and even one of 10 percent reaches 1460 Euro. Suddenly, things look different.

*Exceptions might exist. Consider someone with only 80 cents left the day before the next pay check.

**Here I assume that the life stages even out in terms of who-buys-for-whom.

In another dimension, milk is rarely the only thing bought, and if someone spends 50 Euro per week on groceries, etc., this gives 10 resp. 5 Euro per week in savings at 20 resp. 10 percent. This is then (slightly more than) 520 resp. 260 Euro per year—and then take that times 80…
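The same arithmetic as a tiny Python sketch (using 52 weeks per year for simplicity):

```python
YEARS = 80
PRICE_PER_LITER = 1.0        # Euro, inflation-corrected
liters = YEARS * 365 / 2     # one liter every two days

lifetime_milk = liters * PRICE_PER_LITER
print(lifetime_milk, 0.20 * lifetime_milk, 0.10 * lifetime_milk)
# 14600.0 2920.0 1460.0

weekly_groceries = 50.0
for rate in (0.20, 0.10):
    per_year = 52 * weekly_groceries * rate
    print(per_year, YEARS * per_year)
# 520.0 41600.0 and 260.0 20800.0
```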

The savings on milk is, of course, to be contrasted with any additional costs, e.g. time for research, additional travel, whatnot—but these are often trivial. For instance, someone can simply take note that store A sells milk for 1 Euro and store B for 80 cents and then make a point to try to buy milk when at store B anyway. Ditto other products. Indeed, it will very often be enough to note that store A sells several brands of milk and that these are differently priced, or to simply bend down from the eye-height shelf to the bottom shelf, where prices tend to be lower. (Also see excursion on heuristics.) Spending half-an-hour to research milk prices and then driving for another half-an-hour to the right store, for a one time purchase of a single 1-liter carton, is nonsensical—but who has suggested that for something like milk? (On the other hand, for e.g. buying a car, the same efforts and costs might be extremely well spent.)

Of course, arguments in yet other dimensions might apply, as with, cf. [1], sending the grocery store the right signal. And, yes, if a single someone does it, it is almost certainly pointless—but if many do…

Here we see that the underlying principle holds more generally: that something is pointless when done in the small does not imply that it is pointless in the aggregate. Taking a single step of a journey, reading/writing a single page of a book, whatnot, might not bring that much—but steps plural and pages plural can be a very different matter. Similarly, to revisit program optimization, say that we have two lines of code, the first of which takes a thousand times longer to execute than the second. Surely, we should focus our efforts on the first? Not necessarily: it might well be that this line is executed once a day, while the executions of the other go into the thousands every single hour.
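A back-of-the-envelope Python sketch of the two-lines example (the per-execution costs and execution counts are invented):

```python
SLOW_COST = 1_000e-6   # seconds per execution of the "slow" line
FAST_COST = 1e-6       # a thousand times cheaper per execution

slow_per_day = SLOW_COST * 1           # executed once a day
fast_per_day = FAST_COST * 5_000 * 24  # thousands of executions every hour

print(slow_per_day, fast_per_day)  # ~0.001 s vs ~0.12 s per day: the "cheap" line dominates
```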

Excursion on the varying degree of fallaciousness:
As I say above, a fallacy fallacy is something “invoked as a fallacy in a fallacious manner”. It does not automatically follow that every such invocation poses a fallacy fallacy. Often, we need to look at the exact circumstances; often, we need to keep in mind that things not yet known might be decisive for whether “fallacy fallacy” applies. However, it is noteworthy that many fallacy fallacies arise exactly when the invoker fails to consider the circumstances and commits the meta-fallacy-fallacy “sometimes a fallacy; always a fallacy”—arguably, including the above, not too bright,* professor.

*If he had left it at the one fallacy fallacy, I might have given him the benefit of the doubt, but he failed to respond with reason when several students, including yours truly, tried to point out issues like the above, and he otherwise failed to impress.

Excursion on the “small savings do not count” fallacy:
Why not just suggest a “small savings do not count” fallacy over the current “small savings count” fallacy fallacy? Logically, they would be almost the same and the former is certainly less complicated. The point is that the professor, who was the origin of my take, pushed for a “small savings count” fallacy, and did so in a context of consumer irrationality and psychology, thereby setting the scene.

Excursion on smaller savings:
An overlapping error, and one that might partially have been behind the professor’s take, is to assume that smaller savings do not count when larger can be achieved. Let us say that making one cut saves a thousand Euro a year and another a hundred Euro. It might or might not be that the smaller savings is too little to bother with for someone (but take it times 80…); however, if money is an issue, proper optimization only implies that we should look at the larger savings first—not that we should only look at the larger savings. (With the same applying, m.m., in other areas, e.g. program optimization.) It might even be that the smaller savings should be implemented first, because the implementation is that much faster, requires that much less up-front effort, whatnot. For instance, someone can switch milk brands today, but changing electricity provider might take months and switching to a more fuel-efficient car might not pay off until the old car has reached a certain length of ownership.

Of course, speaking of cars, if the professor had argued that “it is irrational to pinch pennies when it comes to milk but miss thousands of Euros in savings through not doing a price comparison on cars”, he would have had a much better point and been met with much more sympathy from me. Again, he did not.

Excursion on money-saving heuristics:
Finding the optimal price for each and every product would be a ton of work. However, much can be reached by just using some few heuristics, like having a look at the bottom shelf (cf. above). Others include picking the right store (I prefer Aldi to Rewe, e.g.), being on the look-out for reduced prices,* preferring house and generic brands to “brand brands” (as with the “Ja!” brand repeatedly mentioned in [1]), and buying in bulk. Quality-wise, the compromises are comparatively small and the biggest difference between the high- and low-end brands is often just the price. Indeed, “Trader Joe’s” appears to be considered a high-markup brand in the U.S., but my encounters in Germany have been with products sold by Aldi, sometimes in what could be surplus sell-offs, and “Trader Joe’s” left the impression of being a cheap brand to me, before I heard very different references relative to the U.S.**

*While keeping in mind that the reduced price in an expensive store might still exceed the regular price in a cheap store.

**I cannot guarantee, of course, that the quality and selection is the same as in the U.S., but I have no reason to expect significant differences either. The product selection found at Aldi certainly tends to be “American”, with e.g. peanut butter and cup-cakes often occurring.

Excursion on poverty and spending:
I have many times heard the claim that poor earners often spend more money than bigger earners, e.g. through a brand obsession or a wish for “cool” entertainment electronics,* and a failure to consider the benefit of small savings can certainly contribute to such a situation. Cutting food (or electronics) costs by smarter buying can make a significant difference in the bank account of especially poor earners.

*To some degree, this amounts to the stupid both having problems finding good jobs and being wasteful. For instance, I do not have a 40-inch TV, an expensive stereo, an iPhone, and whatnot. What I do have is a few notebooks that do almost everything for me, a very low-end smart phone, and some minor other electronics for travel. Historically, it has been “notebook” singular, and I would get by with one even today. (Not counting a soundbar that I bought solely to serve as a noise source to help drown out construction works and would otherwise not have needed.)

Written by michaeleriksson

November 21, 2022 at 11:35 pm

The fallacy of the superior observer


Preamble: This item has been on my backlog for a long time, in part because giving it a fair treatment would require a considerable amount of effort, including for gathering proper examples. Below, I give an abbreviated and a little simplistic treatment, in order to get rid of the backlog item. The general idea should still be clear.

Time and again, I see someone* trying to observe, analyze, or even psycho-analyze the intentions, behavior, whatnot, of someone else in a very smug and superior manner, somewhat like a stereotypical** anthropologist in a jungle village, displaying an attitude of “I observe you; ergo, I am superior to you”, “I observe you; ergo, I understand the world better than you do”, “I observe you; ergo, I understand you and/or your situation better than you do”, or similar. These “ergos”, however, are fallacies. Firstly, the respective conclusion, as such, does not hold.*** Not only is it a non sequitur, but a contradiction can easily be created by having the parties simultaneously (or at different times) observe each other.**** Secondly, the facts at hand are often, but by no means always, the reverse, with the subjects being smarter and/or better informed than the observers.

*I do not claim to be innocent of the same type of analysis, but, attempting to look back, I hope to have avoided the fallacy part, which lies in flawed reactions and conclusions.

**I leave unstated how common this behavior is among real anthropologists in jungle villages.

***Note that bad logic remains bad logic, even should it land at the truth, just like a still-standing watch still stands still even when it shows the right time (as it, twice a day, proverbially does). Also note the major difference between “I observe you; ergo, I am superior to you” and “I observed you do a handful of stupid things and, from your behavior, I am inclined to see myself as the superior”.

****A situation that can easily arise naturally, even for a more formal setting. Consider e.g. Student A trying to earn extra money through participation as a subject for Study X, where Student B has the task to observe his behavior as part of his course requirements—while Student B tries to earn extra money through Study Y, where Student A is the observer. In more informal settings such cases abound, as with those who play “Psych 101” on forums and usually reveal more about themselves than about their victims.

An illustrative case is an anecdote that I read somewhere (approximately paraphrased from memory and with reservations for details):

A psychologist is giving a personality test to a group of engineers.

Psychologist: Any questions?

Engineer: Should we use the same personality on both sides of the page?

Psychologist: You are supposed to answer truthfully!!!!

Engineer: How stupid do you think that we are?

Here, firstly, the psychologist seems* to commit the fallacy, by seeing the engineers as test subjects, who are to be obedient, cooperative, and analyzable, with no regard for their own interests,** much like school children filling in a similar survey—inferior, because they are observed; the psychologist superior, because he observes. Secondly, the engineer seems to believe himself well ahead of the psychologist in terms of understanding the situation and/or of general intellectual capabilities, which is premature.*** If the engineer also applies an “observer mentality”, we actually have two parties simultaneously committing the fallacy against each other. (I suspect that this reciprocal fallacy is somewhat common in certain circles.)

*I can only speculate, and a common contributor to the fallacy and/or a common result of the fallacy is to take speculation to be the truth. (However, for convenience, I will skip the “seems” and whatnots below.)

**Various tests and surveys can reveal more about an individual than he wishes to be known, and if the wrong entity sees the information, and/or if the information is not sufficiently anonymized, this can have negative consequences, e.g. that a promotion goes to someone else, because HR sees a certain character trait as negative.

***From what I have seen of engineers and psychologists, he might well be correct, but the belief is premature, even should it be correct—unless it also draws on earlier interactions and whatnots.

Other typical examples include that guy on a forum who likes to “Psych 101” his co-debaters, their intentions, and their whatnots, often while being entirely wrong;* the Leftist, especially Feminist, ideologues who write papers on groups that they dislike, while assuming that these groups see the world the wrong way, are naive, whatnot;** and, indeed, many anthropologists, psychologists, etc., who try to analyze societal behavior.

*In those cases where I have been on the receiving end and can judge the truth.

**In reality, it is usually the other way around.

An impulse to finally get this item done was the recent mention ([1]) of a paper that “explore[s] how September 11 and subsequent events have been experienced, constructed, and narrated by African American women, primarily from working-class and low-income backgrounds.”, which not only seems to be an absurd topic,* but also stands a good chance of committing the fallacy. (Whether it does/the authors do, I do not know for sure. However, I have seen similar formulations used by those with the wrong attitude before.)

*While I do not think highly of research into “constructed” and “narrated” in general, here a more interesting and potentially legitimate choice would have been a compare and contrast between different groups, e.g. whether men and women, Whites and Blacks, high- and low-income subjects, whatnot, have different views of the events, experienced the events differently, etc.

Finally, as an exercise for the reader: If a psychoanalyst engages in self-observation/-analysis, should we expect the result to be a superiority complex, an inferiority complex, or both?

Excursion on borderline cases:
It can often be hard to draw borders, e.g. between the fallacy and attempts at manipulation of third parties, horribly misguided speculation, and similar. For example, I once saw a (likely German, but set in China) news clip, which featured a well-dressed woman on a bicycle—and a voice-over on how this woman would consider herself something better than the rest of the persons present. However, if the makers of the news clip had even exchanged a single word with this woman, it was not shown on screen—and neither was any other act of hers than riding a bicycle. (And, no, she was not someone famous, nor anyone of any major concern for the clip.) This was certainly poor and unethical journalism, to the point that a firing seems warranted, but was it also an example of the fallacy or was the cause something else?

(If it was an example of the fallacy, it was also hypocrisy beyond belief.)

Excursion on being in charge:
In some cases, e.g. with psychologists performing a study, there can be overlap with another fallacy, namely that the one who is in charge is automatically superior in general and/or that he* is in charge because he is superior rather than, say, by coincidence or because he happens to be on his home turf, e.g. in an office where he wields some power in the name of his employer, while the tables would be turned in his counterpart’s office. Remember, in particular, what they say about small people who are given a little bit of power.

*But note that the problem, in my impression, is much more common among women, including the type who leads a three-person department and considers herself a big shot, while she is actually small fry by any reasonable standard. This to the point that I wrote the first draft of this excursion using “she” and “her” over the generic “he”.

Excursion on observers revealing themselves and going full circle:
As I note above, the “Psych 101”-ers usually reveal more about themselves than about others. This is a potential issue with observers in general, even among those who do not commit the fallacy (although, it might be more common among them), and including me. What we relay to others is almost invariably colored by our perceptions, our priorities, our attempts to guide the perceptions we leave with others,* whatnot, and this allows others to draw conclusions. Of course, their conclusions will in turn be colored by their perceptions (etc.), which can make their conclusions misguided and tell us more about them. And so on. Indeed, one of the issues with “Psych 101”-ers is how often they try to, so to speak, describe the color filters of their victims but end up describing their own color filters instead.

*We all do, to some degree, and that too is something that might inadvertently reveal things about us. For instance, I have a perfectionist drive, but am usually forced to write texts well short of perfect, e.g. for reasons of time or priority. This, and the (maybe, irrationally perceived) risk that the reader will believe that I fail to spot the flaws, often irks me to the point that I add a comment on the issue, as e.g. with the “preamble” at the beginning of this text—even when I know that very few others would have felt the need to do so. (Many of my footnotes are also caused by this perfectionist drive, but in a more immediate manner. Here it is not a matter of what the reader might think, beyond “Too many bloody footnotes!”, but of a wish for completeness and thoroughness.)

Written by michaeleriksson

November 4, 2022 at 1:22 pm

Not perfect; ergo, useless


Quite a few odd human behaviors, especially on the political Left, could be explained by assuming a “not perfect; ergo, useless” principle, be it as a logical fallacy or as an intellectually dishonest line of pseudo-argumentation. (To the latter, I note that this principle seems to be applied hypocritically to the ideas of opponents but not to own ideas.)

A typical use is to find some flaw or disadvantage and use it to discredit the whole. (If a small flaw, usually combined with rhetorical exaggeration.) This without weighing the overall pros-and-cons, without acknowledging similar flaws in other ideas, products, whatnot, and without considering whether the flaw is repairable*. Consider e.g. an infomercial that I watched at a tender age: A hyper-energetic salesman ran around comparing “his” fitness product to the competition’s:** “The X is great—but, unlike my product, you can’t stow it under the bed!”, “The Y is great—but twice as expensive!”, “The Z is great—but not portable!”, etc., without comparing stowability, price, portability, and whatnot, over all products. It was simply not a fair comparison or an attempt to find the best choice, just a series of excuses to “prove” that any given competing product was inferior to the one sold by him.

*As a good counter-example, complicated mathematical proofs often turn out to contain defects. While these are sometimes fatal, they are often repairable and often the proof can still stand by limiting the conclusion to a subset of the original scope. Wiles’s proof of Fermat’s wild claim is a good example.

**This was likely more than 30 years ago, so I cannot vouch for the exact comparisons (let alone formulations), but the idea should be clear.

Or consider the example that was the impulse to write this text: In Hans Fallada’s Kleiner Mann — was nun?, the protagonist (Pinneberg) tries to get a payment from an insurance company, is met with an unexpected request for must-be-provided-before-payout documents, and inquires at some type of supervisory agency whether these were justified. He obtains and sends all the documents in a batch to the insurance company (in parallel). Now, some of these documents were obtainable sooner (e.g. a birth certificate); others later. Pinneberg’s actions are then limited by the availability of the last of the documents that the insurance company requested. When the insurance company replies to the supervisory agency, it, among other things, tries to pin the delay on Pinneberg: he had the birth certificate at date X and sent it at date Y; ergo, the delay from date X to date Y was his own fault.*

*The book is not sufficiently detailed for me to judge whether these documents were reasonable and exactly how the blame is to be divided. However, this particular reasoning remains faulty, as Pinneberg could not have expected more than very marginally faster treatment through sending in a partial set of documents at an earlier time, and as the extra costs might have been unconscionable. (Pinneberg was a low earner with wife and child in the depression era, and want of money, unexpected expenses, risk of unemployment, etc., were constant issues.)

A more common example is IQ, which (among many other invalid attacks) is often met by e.g. variations of “there are poor high-IQ individuals; ergo, IQ is useless”, “the correlation between scholastic achievement and IQ is not perfect; ergo, IQ is useless”, “IQ is only X% heritable; ergo, we should ignore heritability of IQ”, …*

*Note the difference between these and perfectly legitimate and correct ones, e.g. “there are poor high-IQ individuals; ergo, IQ is not the sole determinant of wealth and income”. These, however, appear to be rarer in politics.

The last points to another common example, nature vs. nurture: too many* seem to think that because “nature” only explains some portion of individual** variation, it can or should be ignored entirely. Note e.g. calls for very high female quotas even in absurd areas, as with a 50% quota within a Conservative party, or various forms of distortive U.S. college recruiting to “help minorities”, unless these minorities happen to be Jewish or Asian. (Or male, for that matter.)

*Even among those who do not blindly deny any non-trivial influence of nature at all, whose position is solidly refuted by the biological sciences. It is rarely clear to me which school any given debater belongs to, which makes the division and the giving of examples tricky.

**This also relates to another fallacy: assuming that a small difference (in e.g. characteristics or outcomes) between typical individual members of different groups implies small group differences. This is sometimes the case, but not always, and especially not on the tails of a distribution.

The possibly paramount example, however, is postmodernism and its take on knowledge and science (logic, whatnot):* because science cannot give us perfect knowledge, science is a waste of time (or, even, quackery). Worse, even attitudes like “because we cannot have perfect knowledge, all hypotheses are equal”, “[…], we can decide what the truth is”, “[…], we can each have our own truth”, are common in, at least, the political and pseudo-academical use. However, even absent perfect knowledge, science can achieve much, say, finding what hypotheses are likely resp. unlikely, what models are good and bad at approximating the results from the unknown “true” model, or increasingly better approximations of various truths. Certainly, I would not be writing this text on a computer had it not been for science and the practical work done based on science.

*At least, as applied practically and/or by those less insightful. I cannot rule out that some brighter theorists have a much more nuanced view.

Excursion on fatal flaws:
Of course, there are cases when a flaw is fatal enough that the whole or most of the whole must be given up. A good example is, again, nature–nurture: if someone wants to base policy on a “nurture only” assumption, any non-trivial “nature” component could invalidate the policy.* A good family of examples is “yes, X would be great, but we cannot afford it”.

*And vice versa, but I cannot recall anyone basing policy on “nature only” in today’s world, while a “nurture only” or a “too little nature to bother with” assumption is ubiquitous. Cf. above.

Excursion on nature vs. nurture and removed variability:
A common error is to assume that the relative influence of “nature” and “nurture” is fixed, which is not the case: both depend strongly on how much variability is present. Notably, if we remove variability from “nurture”, which appears to be the big policy goal for many on the Left, then the variability of “nature” will be relatively more important—and when we look at group outcomes, where the individual variation through chance evens out, then “nature” will increasingly be the dominant determinant. In other words, if “nature” (strictly hypothetically) could have been mostly ignored in the Sweden of 1920, a century of Leftist hyper-egalitarianism would almost certainly have made it quite important today. Similarly, note how attempts at removing “cultural bias” from IQ tests have not eliminated the many group differences in test results, of which it allegedly was the cause. Indeed, the group differences have sometimes even grown larger, because the influence of “culture”/“nurture” has been diminished in favor of “nature”.
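A deliberately simplified Python sketch (assuming independent, additive contributions, so that variances add): shrinking the “nurture” variability mechanically increases the relative weight of “nature”.

```python
def nature_share(var_nature, var_nurture):
    # Share of outcome variance attributable to "nature" under the toy model.
    return var_nature / (var_nature + var_nurture)

print(nature_share(1.0, 3.0))   # 0.25: with large "nurture" variation, "nature" looks minor
print(nature_share(1.0, 0.25))  # 0.8:  equalize "nurture" and "nature" dominates
```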

Written by michaeleriksson

July 24, 2020 at 3:58 pm

The fellow-traveler fallacy


I am currently writing a shorter post on the use of the word “feminism”. As a result of my contemplations, I suggest the existence of a “fellow-traveler fallacy” (based on the originally Soviet concept of a fellow traveler and its later generalizations):

If a group of travelers take a ship from London to New York, can we assume that they share the same eventual destination? No: One might remain in New York indefinitely. Another might go back to London a week later. Yet another might take a different ship to cruise the Caribbean. Yet another might travel across the continent to Los Angeles. Yet another might move on to Anchorage. For some time, they are fellow travelers, but not because they wanted to reach the same destination: They merely had a part of the road in common, before their paths diverged.

During their time together, they might very well have enjoyed each other’s company, they might have helped each other, they might even have collaborated to survive a ship-wreck. This, however, does not imply that their destinies and interests are forever bound to each other. Those who did not intend to remain in New York would have been grossly mistreated if forced to do so. The one heading for the Caribbean could hardly have been expected to be pleased about going to Anchorage instead. For the one to entrust his suit-case to the other (and not to collect it again in New York) would be silly. Etc. Even this does not directly consider the underlying reasons for the respective journey: What if the one was returning from a vacation and the other just starting his? What if one was going to a conference, another visiting a relative, and a third taking up a new position? With factors like these in the mix, even people who are fellow travelers through-out the journey might have such different objectives that grouping them together becomes misleading.

By analogy, it is a fallacy to assume that people who at some point have the same current goals and/or strive in the same current direction will continue to do so, will remain allies, can be permanently grouped together, whatnot—and, above all, to allow one of the temporary fellow travelers to permanently speak for the entire group. Similarly, if there is disagreement about methods, a status as fellow traveler is not necessarily a good thing: If the one buys a plane ticket to Cuba and the other, even for the exact same reason, forces a plane to go to Cuba at gun point, are they really the same?

An easily understood example is how the U.S. and the Soviet Union were close allies during WWII, only to become bitter enemies for the rest of the latter’s existence—they traveled together for a short span, forced by external circumstance, and then went their own, very different, ways for more than four decades. Ideologically, they were as night and day; but as long as they had a common all-important goal (i.e. defeating the Axis powers) they still fought on the same side. Those naive or uninformed enough to commit the fallacy by expecting a post-WWII friendship were severely disappointed; those who actually saw the alliance for what it was, an unnatural union of natural enemies to defeat a common enemy, were not surprised. (This is also a good example of why the saying “my enemy’s enemy is my friend” (a) is at best a semi-truth, (b) gives no guarantees once the common enemy is defeated.)

Most examples, however, are likely to be less obvious (and, therefore, more dangerous). Consider e.g. how the goals of feminism might be almost identical to those of a true equality movement when women are considerably disadvantaged, only to grow further and further apart as female disadvantages are removed or supplanted by new privileges, while male disadvantages remain or are increased and privileges removed, until, eventually, they are on opposing sides. Similarly, a classical liberal or a libertarian might have a considerable overlap with feminism in the original situation, only to end up on opposing sides as the situation changes.

Other potential examples include stretches of classical liberals and social-democrats or social-democrats and communists going hand-in-hand at various times and in various countries, as well as many other political cooperations or “common enemy”/“common goal” situations—even groups like vegetarians-for-health-reasons and vegetarians-for-animal-rights-reasons could conceivably be relevant. I am a little loath to be more specific and definite, because “fellow traveling”, in and by itself, does not automatically imply that the fallacy is present. To boot, even when the fallacy does occur, it will not necessarily affect the majority. (Feminism, in contrast, is an example where the fallacy is extremely common.)

As a sub-category of this fallacy, the temporary fellow travelers who fail to understand that later destinations will diverge, or who are apologetic for misbehavior by their current fellow travelers, are an ample source of “useful idiots”. (Feminism, again, provides many examples.) This becomes a great danger when apologeticism extends to methods, not just opinions, as when lies, censorship, or even violence is tolerated because “they are on our side”, “it helps our cause”, or similar, by someone who would condemn the exact same actions from a group that is not a current fellow traveler.

Another potential sub-category is those that identify some group as fellow travelers, fail to consider the fallacy, and then start to adopt opinions that they “should” have in order to conform further with the fellow travelers, leading themselves astray through committing a second fallacy. (Cf. parts of two older posts: [1], [2])

Written by michaeleriksson

June 9, 2018 at 6:47 am

Group characteristics vs. individual variation


In various discussions, in particular with the PC crowd, I have found at least two recurring errors concerning groups vs. individuals that are worthy of discussion:

Firstly, a highly naive conclusion (based on a correct premise): Individual variation is often greater than group differences (correct); ergo, group differences are irrelevant (very wrong), of marginal importance (very wrong), or only discussed by those who are sexist, racist, or similar (extremely wrong—and, frankly, a misstep that I find hard to comprehend).

To take a recent example from a Swedish discussion (for technical reasons, the means, but not the place, of emphasis have been altered):

I princip alla fysiologiska och psykologiska egenskaper och talanger är normalfördelade i populationen. Om man väljer att skikta det statistiska materialet utifrån kön kommer man att finna att normalfördelningskurvorna ofta skiljer sig åt mellan könen, men man kommer också att finna att de individuella variationerna är större än variationerna mellan könen.
M.a.o.: det är korkat att hävda att män är si och kvinnor så. Lika korkat som att hävda att ”män är långa och kvinnor korta”.
Vill ni tillhöra den korkade skaran kan ni förstås fortsätta att argumentera på detta vis.

(In principle, all physiological and psychological characteristics and talents follow a normal-distribution in the population. If one chooses to compare [original phrasing slightly ambiguous and not translatable] based on sex, one will find that the [distributions] are often different between the sexes, but one will also find that the individual variations are greater than the variations between the sexes.
In other words: it is stupid to claim that men are this and women that. Just as stupid as claiming that “men are tall and women short”.
If you want to belong to the stupid flock, you can obviously continue to argue like this.)

(Note here the strawman of using an unusually strong polarization: The far more typical opinions and statements would be of the type “men are taller than women”. In more detail, the claim is not correct that all characteristics follow a normal-distribution; however, very many do follow a distribution with similar characteristics—at least close to the average.)

Now, why is this line of argumentation, at best, specious? Broadly speaking, even when individual variations are large, the group variations can have a very considerable effect on group outcomes. This applies in particular when we look at groups which are dominated by individuals who (wrt at least some characteristics) belong to the upper or lower end of the distributions. Consider professors of mathematics, convicts, Olympic athletes, … However, even in daily life, the effects can be large. I strongly recommend reading The Bell Curve, which discusses how a great number of outcomes correlate with an implicit grouping by IQ and how groups (grouped by other criteria) with different average IQs have different outcomes. (Some of La Griffe du Lion’s writings are also quite good—and available online.) Notably, the more equal opportunity is, the more important group characteristics become for group outcome.
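To illustrate the effect at the upper end of the distributions with invented numbers (two normal distributions, SD 1, means only 0.5 SD apart—far less than the individual variation), a small Python sketch:

```python
from math import erfc, sqrt

def share_above(mean, cutoff, sd=1.0):
    # P(X > cutoff) for a normal distribution with the given mean and SD.
    return 0.5 * erfc((cutoff - mean) / (sd * sqrt(2)))

a = share_above(0.0, 2.5)
b = share_above(0.5, 2.5)
print(round(a, 4), round(b, 4), round(b / a, 1))  # 0.0062 0.0228 3.7
```

Despite the modest difference in means, the second group is over-represented above the cutoff by a factor of almost four; the factor grows further as the cutoff is raised.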

An additional hitch is that not all differences are dominated by individual variation. Notably, the mere existence of special cases does not imply that individual variation is the greater factor. There are many characteristics where the difference between two groups is so large that the group difference dominates. Consider the attribute height and the groups of 5 respectively 10 y.o. children for an uncontroversial example.

Secondly, the equally naive conclusion (or evil strawman?) that those who claim that group X is Q also believe that all individual members of X are Q—or that those who claim that group X is more/less Q than group Y also believe that all individual members of X are more/less Q than all individual members of Y. There is nothing wrong with statements like “Men are taller than women.” or “Ashkenazi are intelligent.”—even if there are great individual variations: The speaker will almost certainly take the existence of exceptions for granted, assume that the reader is intelligent and informed enough to also take them for granted, and consider it a given that the statement refers to group characteristics that need not apply to any specific individual. (The rare exceptions will almost always be clear from context.)

As an aside, a principally different error with very similar consequences is to assume universal quantification (“all”) where existential quantification (“some”) or a middle step was intended. During my readings of relationship forums a few years ago, e.g., I saw a great number of cases where a male poster wrote something which most likely was a “some”, on the outside a “most”, e.g. “Why do women like a-holes?”—only to be met with a barrage of “Stop generalizing! We are not all like that, you misogynist!”. The point is that an unspecified quantification is not necessarily an “all” (or even a “most”), that it is highly presumptuous to assume an “all” where it is not actually stated, and that great attention to the context must be paid before raising accusations.

Written by michaeleriksson

December 28, 2010 at 4:34 pm