Michael Eriksson's Blog

A Swede in Germany

Archive for January 2022

Some UI problems and other complaints

The last few weeks have been so horribly frustrating, between construction work, idiot politicians, absurd UI decisions, an extensive (and not smooth-running) laptop installation, a winter depression, and a great number of other annoyances and wastes of my time, that I feel like snapping. Below I try to get rid of at least some of my frustration. (Do not expect a quality text.)

Specifically, UIs will be the topic. The focus will be mostly on “modern” UIs and not always restricted to events of the last few weeks.

New laptop: My old laptop died a few weeks ago, and I have spent considerable time setting up a new one, while switching from Debian to Gentoo.*/** Among the various UI problems encountered, two made the early phase horribly frustrating:

*Gentoo has a more sensible approach, with the user more in charge and (by default) fewer conceptually flawed components. Out of the box, I am now rid of e.g. Systemd, PulseAudio, and most of the desktop nonsense. Debian also has a long history of interfering extensively with the “upstream” code of various packages, often for the worse.

**Note that this has brought many issues that are not “someone’s fault” or UI related, but that still contribute to the overall “annoyance load”, including e.g. the need to learn a new package-management system or the need to switch window manager, as WMII, which I had used for a few years, is currently not sufficiently supported. (The tentative replacement, Awesome, is good, after some considerable config changes.) Another good example is that my longstanding personal configuration choices were not automatically present, e.g. that any non-configured Bash starts in Emacs-mode instead of Vi-mode. A borderline case is some odd defaults, like the insistence on shoving a umask of 022 down the user’s throat—as if the average user would like every other user to have the right to read his every document …
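
As an aside, a minimal counter-measure, assuming Bash and that owner-only file permissions are wanted (the exact value is, of course, a matter of personal preference):

    # In ~/.profile (or similar): new files default to being readable and
    # writable by the owner only, instead of readable by every other user
    umask 077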

The BIOS (UEFI, whatnot) has no generic “boot from USB” or “boot from CD/DVD” setting, and it refuses to remember a device-specific one from boot to boot, implying that any (non-hard-drive) reboot involves going into the BIOS, selecting the appropriate individual device for boot, and hoping that everything goes as planned—which is not a given, as sometimes even this individual change is ignored, forcing repeated reboots and BIOS visits.

The virtual consoles per default have a horrifyingly annoying blinking cursor. A blinking cursor is, in general, an ill-advised distraction and annoyance, but this one blinked at such a hysterical rate that it was borderline impossible to do any work. The alleged fix, “setterm -blink off”, did not work, and I eventually resorted to “setterm -cursor off”, as having no (!) cursor was better than having this hysterically blinking one. Unfortunately, this had no effect in many of the tools used, and the use of many tools, e.g. Vim, turned the cursor back on, even after leaving them. Eventually, I included “setterm -cursor off” in the PROMPT_COMMAND, causing it to be automatically executed after each command.
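
For illustration, a minimal sketch of the kind of ~/.bashrc snippet involved (the guard against non-console terminals is my own addition; exact details will vary from system to system):

    # Append "setterm -cursor off" to Bash's PROMPT_COMMAND, which is
    # executed just before each new prompt—thereby re-hiding the cursor
    # even after tools like Vim have turned it back on.
    if [ "$TERM" = "linux" ]; then    # only on the virtual consoles
        PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND; }setterm -cursor off"
    fi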

Microwave (Samsung something-or-other): In the long time before I moved “full time” into my apartment, when I only spent a weekend in it now and then, I splashed out on a microwave with a built-in “regular” oven-function, to have a wide range of options even without a kitchen. While I do not remember the price, it was by a considerable distance the most expensive microwave oven that I have ever bought, in part guided by my strong earnings and the wish for some experimentation in price classes.* Apart from the (regular) oven functionality, which is surprisingly** limited, it is also the worst microwave that I have ever owned.

*I habitually go for cheaper products and have found that breaking habits, every now and then, can be valuable, because I often find something of value, a new insight, a better habit, whatnot. This could well apply even to price-range habits. (However, during my limited experiments, I have tended to find that more expensive products have worse UIs and worse usability than cheaper ones—and are not always superior in other regards either.)

**Or not: As it runs from a regular wall socket, the power requirements of a regular oven might be too much to be safe, which could explain the restrictions.

Consider:

  1. There is a barrage of one-button controls for this-and-that, e.g. to make popcorn. They have no practical use for me, as I prefer to go by the instructions on the package of the food to cook, and because such generic one-size-fits-all attempts tend to give poor results in light of varying quantities, densities, and whatnots. Moreover, as they are icon-based, it is often hard to understand what they are supposed to do in the first place. (Generally, I note that if an interface uses English, only those who understand English understand. If it uses icons, no-one understands.)
  2. The traditional (and vastly superior) dials to choose effect* and duration for the microwave function are missing, in favor of a multi-step button-clicking: First, choose the microwave function with one button. Second, note that the (unconfigurable) default effect is 900 watts, while most food packages indicate 600 watts. Third, manually reduce the effect. Fourth, wait and wait until the indicator switches from effect to time. Fifth, manually enter the time with several clicks. (And, no, the time is no more saved for the next time around than the effect.) Sixth, start the actual cooking. This takes several times as long as just (if at all needed) turning two dials and pressing the “on” button.

    *Many more traditional microwaves have another problem here: The effect is not given in watts, but indicated with informationless claims like one, two, or three stars. Well, the package says to use 600 watts—how many stars is that supposed to be?

  3. There is a built-in digital clock, the setting of which requires the usual number of steps,* making it a hassle. But: even a very short cut of electricity, e.g. due to a single moment of power failure or a kitchen-internal move, resets the clock to 12:00**. This is the type of cost-saving that I do not expect in a machine of this price. Just adding a small buffer to keep the time for a few minutes would cost next to nothing in comparison to the overall price—and less than the pointless functionality that has been added.

    *I have only done so once, a few years ago, and do not remember the details, but the reader is likely familiar with similar clocks.

    **I.e. noon; as opposed to the more common 00:00, i.e. midnight.

    On the upside, this allows a circumvention of the time setting: Simply wait until noon and unplug the machine, then plug it back in. (Downside: I have to remember to actually do this, and at exactly noon, whenever the need arises, e.g. as daylight-saving time begins or ends—which can lead to a considerable delay, often weeks, before I correct the time.)

  4. The usual alarm to indicate the end of cooking is present. However, where a typical and sensible microwave indicates the end of cooking once and then lets the user handle matters, this one is silent for a while, then signals again, is silent again, signals again, until the user has actually opened the door. Horribly annoying and with, at best, minimal value in return. To boot, the alarm is so loud and shrill that it borders on the painful.

    In due time, I found myself keeping a separate eye on the time, so that I could pre-emptively stop the machine a few seconds before the alarm went off—thereby entirely invalidating the reason for an alarm. (Yes, there is a setting to not ring the alarm at all, but no volume control and no “ring once” setting. Of course, turning the alarm off entirely could legitimately lead to misses on my part, as I do not catch it every time and as I can be very focused on other matters.)

Bose 700: To make the various bouts of construction noise easier to survive, I use a pair of Bose 700s, widely considered among the best in noise-cancellation—and bought at a price above 200 Euro. (And even that was with some rebate. The typical list price at the time was 299 Euro, or 300 Euro with sensible rounding.) As far as noise-cancellation goes, they are the best* that I have tried, and the sound reproduction is, at least, among the best. The UI on the other hand is horrifyingly poor. (Some additional negatives, from my personal point of view, arise from the strong focus on mobility, e.g. that the construction is “on-ear” instead of “over-ear”.)

*But note that “best” does not necessarily imply “good”. The field still has a long way to go. For instance, even with simultaneous use of these headphones and ear-plugs, construction noise remains very audible. I would further opine that ear-plugs alone do more than the headphones alone—at a small fraction of the price.

Consider:

  1. The main control for the headphones is intended to be a smartphone app, which limits the user unnecessarily. What if someone does not have a smartphone? What if the battery has run out? What if the smartphone is in another room? Or, as in my case, what if the user knows better than to install various apps from sources likely to abuse the confidence?

    And why not allow the same type of control from a regular computer?

  2. The few “mechanically obvious” controls are too sensitive. Touch the headphones in the wrong place, e.g. when putting them on, taking them off, or making a minor adjustment of position, and something could easily be triggered. I am particularly prone to accidentally touching the left-side control for the degree of noise-cancellation, which results in a loud and annoying claim of “Five!”—and then my noise-cancellation is halved. Two more (deliberate) clicks are now needed to give me “Zero!” and then “Ten!” and a restored cancellation. This is the more absurd, as I never have any use for any setting short of the full “Ten!”. The headphones simply are not so good that a reduced setting would be useful. Even if worst came to worst, the user could just remove them from his head, if “Zero!” was what he actually wanted. (Note, e.g., that playing music while at “Zero!” makes little sense, as the music is as likely to cause the user to miss whatever external sound he wanted to hear as the noise-cancellation would have been.)
  3. Volume (and some other things) can be controlled by a touch pad of sorts, located on the front of the right side. This has the disadvantage of being hard to detect—the user is unlikely to even realize that there is a control there unless he reads the instruction manual. (I did; many others do not; and it could be argued that the task at hand is so simple that a manual should not be needed. The need would, then, be a sign of design failure.) Most of the time, the volume control works well; too often, however, it does not. Instead, I am met with a loud and annoying “Boop!” and nothing happens.*

    *Why, I have not yet figured out. It might relate to something like the temperature or dryness of my fingers, that they are recognized as fingers on the one occasion and not on the other. If so, this is so severe a design issue that the touch pad cannot be defended.

    To boot, when I am thinking or relaxing, I often put my lower right arm on my forehead. (Do not ask me why—it just happens.) If I do so when wearing the headphones, my upper arm often comes into contact with the touch pad—and a loud and annoying “Boop!” follows.

  4. Every time that I turn the headphones on, I have to wait for many seconds before I hear a loud and annoying “Bluetooth off!”. Only after this do they work, even when I have them plugged in by wire. (Presumably, some type of Bluetooth search is made, even when a wire is present, which is highly dubious. The loud and annoying announcement does not help.)
  5. The headphones are usable even with an empty battery, assuming a wired connection—with no noise-cancellation and a worsened sound, just as I am used to from other headphones. However, this does not apply when charging. When charging, even the use of the turned-off headphones over a wire is not possible! This is a highly annoying and hard-to-defend restriction, which does not match the behavior of any of the other noise-cancellation headphones that I have owned over the years.
  6. If the battery runs low, the user is pestered with loud, annoying, and poorly pronounced claims of “Battery low! Please charge now!”. These achieve nothing but shortening the use of the headphones, as e.g. any music playing is suppressed in favor of these announcements (and few external disturbances are equally annoying). Note that there is no informational value to them either, as the effective use of the headphones is ended as the messages begin. (In contrast, a single claim of “Battery will run out in twenty* minutes.”, while dubious enough, would at least have the benefit of an advance warning.) What has any other noise-cancellation headphone done so far? Given the user the full run of the battery, after which he has noticed that the battery is empty and charged accordingly. Consider a car that turns itself off before the tank is empty, with a warning that the tank soon will be empty. How would the driver be helped by that?!?

    *I have, for obvious reasons, not checked for how long this annoying message goes on (but it appears to be for some time), and it is likely to be less than twenty minutes; however, twenty minutes might be reasonable for an actual advance warning.

    If in doubt, some type of LED-based charge indicator would have been much preferable.

  7. Which brings me to the topic of LEDs. There is an indicator present, but it is so obscure in its semantics as to be near useless. Moreover, when the headphones are charging, we have another case of hysterical blinking. For obvious reasons, this blinking is easier to ignore than the cursor discussed above; however, it is still an unnecessary annoyance, and I often find myself turning the headphones so that the LED is not visible at all, during charging, making the blinking entirely pointless.

Smartphone (Android): If I had kept notes during my rare smartphone uses, I could likely have written a few pages worth on that alone. To give just one example, I often use the smartphone for Internet access for my laptop by USB-tethering. Turning this on the first time was easier said than done. Once on, it turned itself off again and again, every time that I unplugged the USB cable, and sometimes spontaneously during use. To make the tethering permanent, which I only found out through an Internet search, I had to enable the developer options and change a setting there. Absolutely inexcusable! To make matters worse, at some point, a few months ago, USB-tethering was suddenly turned off again, despite my not having touched the actual smartphone for days (i.e. no user action could explain this). It turned out that the entire developer options had somehow, spontaneously, reset themselves and now needed renewed activation.
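
For those comfortable with adb, a hedged sketch of how such settings can at least be checked and restored from the laptop side. This is a sketch under stated assumptions: the “development_settings_enabled” key and the “svc usb setFunctions” syntax are what I believe current Android versions use, but both vary between versions and vendors—treat it as an illustration rather than a recipe:

    # Check whether the developer options are (still) enabled; 1 = enabled.
    adb shell settings get global development_settings_enabled

    # Re-enable them without clicking through the phone's menus.
    adb shell settings put global development_settings_enabled 1

    # Force the USB connection into tethering (rndis) mode; the exact
    # sub-command and function name differ between Android versions.
    adb shell svc usb setFunctions rndis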

Sennheiser HD 4.50 BTNC: Earlier today, my Bose headphones were charging and I tried to use an older pair of Sennheiser headphones—for the first time in (likely much) more than six months. I had forgotten the exact use of the controls, and the controls were unmarked. Based on layout and my vague memories, there only seemed to be one candidate for an on/off button. I pressed this button—nothing happened. I pressed it again—nothing happened. I pressed it for longer—and my soundbar went quiet as the headphones stole an existing Bluetooth connection! This is inexcusable on two counts. Firstly, any control overloading should be done in a natural manner, not mixing “orthogonal” concerns like “headphones on/off” and “pair Bluetooth” or “Bluetooth on/off” (or whatever this might have been). Two separate controls, preferably of the mechanical type, should have been provided.* Secondly, the headphones, the source device, the Bluetooth protocol, or whatever is ultimately to blame, should have respected the existing connection.**

*There were several other controls, none relevant today, which I have never used, because they appear to deal with e.g. volume increase/decrease over Bluetooth and I have virtually always used the headphones connected by wire to my old laptop. (Debian, at least, only supported Bluetooth and sound over the idiotic PulseAudio bullshit—and I was not going to re-infest my computer with said bullshit just to save myself a single small cable.)

**This the more so, as there is a risk of third parties taking something over. I note that I somehow managed to receive someone else’s TV (?) on my soundbar during early and failed attempts to pair it with my computer. (Again, no Bluetooth sound without the PulseAudio bullshit.)

Of course, both this and my attempts to correct the situation were punctuated with highly annoying and overly loud cries of “Connection!” and “Lost connection!” from the headphones—not as bad as with the Boses, but really not helpful.

I used to love these headphones: Sound and comfort were both great, the noise-cancellation was very-good-by-the-standard-of-the-day, and the UI, while far from perfect, was far better than with my Boses. (An on/off button is really all that is needed.) I would probably still have preferred them, outside of construction-work phases, had the earmuffs not been so worn down. Now, I tore them into pieces. After week in and week out of frustration, I could not take this last straw, and I literally tore them into pieces.

Design advice (very incomplete):

  1. Prefer optical indicators/indications to voice/audible ones. If you do use something audible, keep the volume at a reasonable level, avoid shrill or unpleasant noises, and make any voices used sound as natural as possible.
  2. Be cautious with any type of notification and its strength. For example, only use blinking when you have a valid reason to attract the user’s attention—never for something that merely exists (e.g. a cursor) or to indicate a long-term state with no need of intervention (e.g. that something is charging). More generally, a signal that amounts to “Pay attention to me!” should only be used when and for as long as there is an actual need to pay attention.
  3. Prefer easily recognizable controls over more obscure ones.
  4. Prefer controls with a mechanical effect over a (solely) digital one, e.g. an on/off switch that is pushed between on- and off-positions over a “stateless” button.
  5. Simpler and more generic controls, e.g. microwave dials for time and effect, are usually better than less generic ones, like the one-push buttons or the elaborate choice dialogue described above.

    (Consider, as an analogy, a water tap: Would you rather have a typical modern tap with one control for water flow and one for temperature—or a set of buttons where you can choose, say, nine pre-determined combinations of water flow and temperature? Almost certainly the former.)

  6. Try to design from a user-centric perspective—not a designer-centric one.

    Note, in particular, that what the designer might consider important is not necessarily what the user will consider important, be it in terms of functionality or when notifications are needed. (Cf. above examples or note the case of focus stealing, which can hardly ever be justified.)

  7. Be cautious with experimentation when users might have expectations from similar products. If the users ask for a better horse, give them a better horse first, and investigate the topic of cars second. (Note that there might be quite a few things that a better horse is well-suited to do, but a car is not, like traveling a narrow forest path or handling impossible looking terrain.)
  8. Beware behaviors that can prove annoying over time (let alone immediately). This applies in particular to repeated efforts on the part of the user (which could have been avoided by more sensible defaults, the ability to change defaults, or similar) and intrusive (e.g. loud or blinking) notifications to the user. (Also see the excursion below.) Keep in mind that catching someone who is already at the edge can make even a normally tolerable event cause disproportionate reactions (note my poor Sennheisers above).

    As an aside, there are many analogues to this in other areas. For instance, I would give the two single most important rules of movie/TV/YouTube/whatnot music as 1. No music is always better than bad music, and 2. No music is almost always better than highly repetitive music. (Still, especially on YouTube, bad and repetitive music is very common.)

Excursion on repeating and unsolvable issues vs. anger and frustration:
If we look at humans during many earlier time periods, anger was a constructive and/or helpful reaction to many problems—not restricted to the obvious case of fighting. Consider e.g. moving a fallen tree trunk off someone, pushing a carriage back onto the road, removing a stubborn stone from a field, or similar. If at first you don’t succeed, get angry and try it with more force than available in a calm state. Fail again, get angrier. Even many interpersonal issues, short of a fight, could in some sense benefit from anger, in that the angrier person has a larger chance of getting things his way, e.g. because he creates the impression of being more likely to take a physical fight on the issue. (Note that I am not saying that such “interpersonal” anger would be constructive, in the best interest of the group, or, even, necessarily in the best long-term interest of the individual.)

Such anger has never been without problems, as there is always a trade-off, e.g. in that the carriage pusher increases his injury risk or that someone involved in an interpersonal issue increases the risk of a fight*. However, evolution will have ensured that anger occurs at least approximately when and where it had a net-benefit in terms of expectation value in older times.

*Actually reaching the point of a fight is usually a bad thing, which is why the perceived anger works—if the one party seems willing to take the fight, the second has to think hard about the risks vs. payoffs. Note similar situations among animals, where e.g. a stronger individual might yield in the territory of a weaker individual, or to a female defending her offspring, because the willingness to take a fight is large in the other party.

Now look at the modern world, where other situations often apply. If, e.g., someone has a computer problem, anger will rarely help, because a greater exertion of physical force is more likely to damage the computer than to resolve the problem—and anger makes it harder to think clearly, which is what is really needed. Still, the tendency to anger remains, and when a certain problem or annoyance repeats again, and again, and again, without anger helping in the least, the anger and (later) frustration are likely to rise rapidly. I am, myself, unusually prone to this issue, but I have e.g. heard many a colleague suddenly type with several times his normal force, spotted him silently (or not so silently) cursing over some user-hostile program or MS Windows, or seen him leave his computer to get a cup of coffee* with fire in his eyes on so many occasions that I have no doubt that the problem is wide-spread. (And it seems more likely to hit the highly computer proficient, possibly because they know how much better things could be without the many idiocies and idiotic restrictions of modern UIs, in general, and GUIs, in particular. Many failures to grow angry, here and elsewhere, are not so much based in a cool head as they are in ignorance.)

*A very good idea, as it gives some distance and relaxation, but one surprisingly hard to actually implement, as at least I have a natural urge to continue with the problem until it is resolved.

Similarly, anger in interpersonal situations is more likely to backfire today than in the past, be it because it is less productive or because any actually manifested violence, often even the threat of violence, can be punished by the authorities—and not necessarily in an even remotely fair manner. Consider e.g. even the most incompetent and uncooperative civil servant* or customer-service rep: No matter how natural the anger, it will not help, because no actual consequences for the counterpart are likely, protected as he is by semi-anonymity, an often large geographic distance, and, at least for civil servants, a stronger power prepared to defend his incompetence with any means. Worse, any expression of anger, no matter how justified, is likely to antagonize the counterpart, who is, again, protected against consequences, but who might very well be able to cause further problems for the citizen/customer—through, if nothing else, deliberately substandard work.

*I have repeatedly read claims of civil servants being more exposed to threats or whatnots today than in the past. For lack of detail in this reporting, usually restricted to what amounts to “bad citizens harass poor civil servants”, I cannot say much. However, every time that I read something like that, I ask myself how much of the problem actually lies with the citizen and how much with the civil servant and/or his employer—treat citizens like shit and they will grow angry. Many of my own experiences with civil servants have been utterly inexcusable.

Written by michaeleriksson

January 26, 2022 at 6:22 pm

The reserve theory of survival

As an addendum to some of my texts on COVID (e.g. [1] from earlier today):

In my thinking around survival and longevity, I tend to rely on the “reserve theory [of survival]” or “reserve principle [of survival]”. (For want of a better name. While the idea is reasonably obvious, and likely to have been had by a great many others, I cannot recall ever having encountered it outside my own thinking, and I am not aware of any names other than my own.)

The idea is that we all have a certain reserve in various bodily functions/organs/whatnot that is more than enough to handle a relaxed and unstressed situation, even for most of the elderly. However, if these reserves are exhausted, either because of an increase in stress or a decrease in the reserves, then we have a problem—possibly, a deadly one. For instance, that very old lady might have enough reserves to e.g. go to the store, but if her handbag is snatched, trying to run down the thief might be too much. Similarly, a bout of flu might reduce her reserves to the point of making even a store visit a life-threatening experience. (Leaving aside whether someone with the flu, irrespective of age, should be running around town.)

These bodily whatnots include factors like heart, lungs, kidneys, liver, … Even someone with e.g. a bad case of liver disease can live in the now, at least with some care, and maybe even reach a respectable age, but his life expectancy is likely to be well short of what it could have been, because further losses of liver function are covered by lesser reserves than they would have been with a healthy liver.

The problem with age is, of course, that various damages, wear and tear, age related deficits, whatnot accumulate over the years, while the state of training tends to worsen. (With the lesson that those things that we can still train with age might be well worth training. It is no coincidence that the active elderly tend to live longer than the inactive.)

A good example is the heart, in a simplified model, where two values, resting heart rate and maximal heart rate, can illustrate the reserve. A sporty teen might have a resting heart rate of 50* beats per minute and a maximal heart rate above 200—a reserve of more than 150 beats per minute, or 300% compared to just resting. Wait until the same person is 90 and out of shape, and the same numbers might be 100 and 120—a reserve of 20 beats per minute and 20%. Which incarnation will have a problem with that flu?

*Numbers should be taken with a grain of salt. They are intended for illustration, not medical exactness. (But I do not consider them unreasonable.)
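
To spell out the arithmetic (same illustrative numbers; the relative reserve is measured against the resting rate):

\[
\text{reserve} = \mathrm{HR}_{\max} - \mathrm{HR}_{\text{rest}},
\qquad
\text{relative reserve} = \frac{\mathrm{HR}_{\max} - \mathrm{HR}_{\text{rest}}}{\mathrm{HR}_{\text{rest}}}
\]
\[
\text{teen: } 200 - 50 = 150 \text{ bpm}, \quad 150/50 = 300\%;
\qquad
\text{at 90: } 120 - 100 = 20 \text{ bpm}, \quad 20/100 = 20\%
\]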

A problem with both the vaccine debate, in as far as risks are at all acknowledged, and medicine in general is that there is little concern for such reserves—the patient survived now and what comes later is not our problem. If that lung cancer patient had one lung removed to successfully remove the cancer, he was “healed”—but what about his life expectancy with that lung gone? How many years of his life might that have cost him? (But note that I am not saying that the decision to operate was faulty. It might very well have been the lesser evil and an objectively correct decision. The point is that there is a difference between truly being healed and being “healed”.)

Now, looking at COVID vaccines: Let us say that someone experiences some side-effect, e.g. a heart issue, for some time, and then bounces back. There was no death—so no big deal. Right? But what if the side-effect left a permanent reduction in reserves? A little bit of scarring on the heart muscle, e.g., might not be very dangerous at twenty—but what about the same heart sixty years later? It might, for instance, be the difference between a deadly and an almost deadly heart-attack, because the reserves needed to survive were not there.

(Of course, similar thinking might be needed with COVID, itself, or any other disease.)

An outright disgusting related area is the description of some developmental problems affecting the brain with e.g. “most have a normal intelligence”. The hitch? The word “normal” is taken to imply an IQ above 70, i.e. no more than two standard deviations below the mean (given a mean of 100 and a standard deviation of 15). These cases of “normal” intelligence might then have taken a hit of two standard deviations compared to where they “should” have been—a truly massive loss. In individual cases, we might have someone who “should” have been a genius, suffered some mishap, and ended up with an IQ half of what he might have had—but, because he is still above 70, he is deemed of “normal” intelligence.

Written by michaeleriksson

January 20, 2022 at 5:19 pm

The intellectually dishonest harming their own causes / Follow-up: various

As I have discussed in some earlier texts (e.g. [1]), the problems associated with replacing a fair debate with “Fake news! Fake news!”, censorship, and other anti-intellectual and intellectually dishonest methods can be grave. An interesting issue is that this type of “argumentation” often backfires, both in that the intellectually dishonest lose credibility, much like the boy who cried wolf, and in that they will lose an audience for solid arguments (should such exist), or, respectively, the chance to present these arguments.

In particular, those who are strong critical thinkers, know science and logic, are used to thinking for themselves, etc., are exactly those who tend to be put off by this type of “argumentation”. In contrast, those who fall for it tend to be the gullible, those who cannot or will not think for themselves. (Note the stark contrast with official propaganda around COVID, where, somehow, the gullible are the enlightened and the thinkers in need of “education” or whatnot. Also note [2], where I discuss my own vaccine situation and the issues of intellectually dishonest, sometimes even outright terrifying, propaganda.)

Consider the case of side-effects from the COVID-vaccines. There appear to be two camps over the last year-or-so:

Firstly, the mainstream camp, which claims that side-effects are far too rare to be of concern—and which supports this opinion more with defamation of the other camp than with arguments, statistics, science, whatnot. An important special case is the occasional identification of those skeptical towards the current COVID-vaccines or their use with the older and more general anti-vaccine movement. (Yes, members of the latter are highly likely to be members of the former, but the opposite does not automatically apply, and there are legitimate concerns around the COVID-vaccines and their use that are not relevant to the debate on vaccines in general.)

Secondly, the opposing camp, which claims that the side-effects outweigh the benefits for those not in a risk-group, and which at least tries to support this stance with arguments, statistics, science, whatnot.

This second camp, however, contains a spectrum ranging from those who believe that the risks, while unnecessary and not outweighed by any vaccine benefits, are very small, to those who believe that the risks are very large. (And especially the latter might make further-reaching claims than those mentioned above.)

If (!) the mainstream camp is correct, or at least approximately correct in that the “very small” end of the opposing spectrum is right, why not take the debate, clarify the situation, and avoid fears in the population that the “very large” end of the opposing spectrum is correct? Vice versa, if the mainstream camp is incorrect, this should be established as soon as possible, to reduce the risks to the people.

Example: Apparently, a great number of athletes have dropped dead after taking a vaccine and are making the headlines of alternative media,* while being ignored or explained away with (often) weak arguments in mainstream media. Assume that we were instead to perform some type of baseline comparison, to establish whether the aggregate numbers are higher than they normally are and/or whether the rate of death is higher within some time after taking the vaccine than among the unvaccinated. If they are not, this would be a significant (and, for once, legitimate) gain for the mainstream camp; if they are, this should be brought to common knowledge and begin** to influence policy as soon as possible.

*Sometimes, regrettably, after merely dropping dead, with only a speculated connection to the vaccine—neither camp is perfect.

**However, an increase does not automatically give us vaccines as the cause of that increase. The conclusion would be tentative and the correct measure would be to scale back vaccinations (outside risk groups) while further investigations are made.

Similarly, the mainstream camp has pushed a narrative that the unvaccinated would be a threat, would allow the virus to survive*/mutate/become more virulent/whatnot. This usually through argumentation-by-assertion. The opposing camp has the opposite take—that “over-vaccination” creates more dangerous versions of the virus, and that the vaccine is better left to risk groups. This stance is supported by arguments, empirical knowledge about viruses in general,** and reason. Indeed, when it comes to antibiotics, the mainstream stance is (or has historically been) the same—we should use antibiotics with restraint, lest some bacterial strains develop immunity and leave us defenseless. Again: if the mainstream camp has it right, it should take the debate and try to win that debate based on better arguments; if it has it wrong, we must learn this as soon as possible and policy must be adapted.

*A particularly perfidious claim, as there is no greater chance of exterminating COVID than the flu—even with a fully vaccinated population.

**Where the characteristics of different viruses have to be factored in. As has been noted repeatedly by experts, the characteristics of e.g. the viruses behind smallpox and COVID are very different, making the successful anti-smallpox strategy pointless with COVID. In contrast, COVID does have much in common with the flu, and lessons from major flu epidemics are more valuable.

Speaking for myself, I am genuinely concerned about at some point being forced to take one of the current vaccines. This, and pay attention here, not because I believe that the risks are very large, but because the risks are unknown to me. In particular, as things currently stand, there is no possibility for me to give “informed consent” in any reasonable sense of the phrase—the behavior of the mainstream camp has ensured that I am uninformed*. In contrast, COVID is a known risk—and that risk is very small for me, as I am not a member of a risk group. I would effectively be weighing a known very small risk against an unknown risk somewhere in the range from very small to very large.**

*Note that this is a type of uninformed that differs from that of the ignorant average citizen: I have a considerable, if still layman-level, amount of information on various topics and sub-topics, but this information is often subject to great uncertainties and to conflicts of the type “presumed-expert-A says one thing and presumed-expert-B another”—and where clarity cannot be found, because the one side refuses the debate with the other, implying that any single source will present its arguments (or “arguments”) unopposed. (To which, cf. the next footnote, must be added that some items might still be unknown or unknowable even to competent experts.)

**And this just looking at the somewhat near future. In addition, I have seen some raise concern about unknown long-term damage. This is both natural and valid, but it is interesting that the mainstream camp raised such concerns about COVID early on, yet is now trying to squash any such concerns about the vaccinations. Again, the argumentation is not directed at rational decision making but at increasing COVID fears or avoiding vaccine fears—never mind the underlying reality.

To this I might add that the incorrectness of claims from the mainstream camp is often indisputable—not merely an issue of something unknowable, a difference in opinion, or similar. For instance, some months back, I read a text where some utter idiot argued that because our school children would be extra super-duper vulnerable to COVID, it would be extra super-duper important to prioritize vaccinations for said school children. However, experiences gathered over roughly two years show indisputably that school children are extremely unlikely to fall victim—either they avoid infection to begin with or the infection, with very, very few exceptions, never moves beyond something trivial. Indeed, school children might be the single age group that is naturally the safest.

On the upside, there seems to be a trend towards more common sense at the moment, including positive claims by the WHO and the U.K., but it is too early to be truly hopeful—and it has yet to make any noticeable change in Germany (where I live).

Written by michaeleriksson

January 20, 2022 at 3:49 pm

Djokovic as GOAT? (III) and COVID distortions

I have repeatedly mentioned Djokovic as the potential GOAT of tennis, including in at least [1] and [2].

Last time around ([2]), I wrote that:

Should Djokovic add this [2021] year’s U.S. Open, winning the Grand Slam, this would probably close the debate for me. If he does not, I suspect that the developments over the next one or two years will leave the same conclusion. (But let us wait and see.)

While Djokovic was “only” the runner-up, I see it as time to close the books on the current* candidates: Djokovic is the GOAT of at least the Open Era.**

*What future candidates might achieve is yet to see.

**For reasons discussed in older texts, a comparison outside the Open Era is even trickier, but there are precious few candidates that are even on the table as superior, even should we drop the “Open Era” restriction.

This for two reasons:

  1. Djokovic has torn ahead on my main proxy criterion, weeks at number one, and has an overall record in almost any other category that matches the best of the best, including Federer.

    Specifically, he now stands at a massive 356 weeks (Federer 310; no-one else above 300) and counting. This despite being shortchanged 22 weeks due to a COVID freeze.* The true number, then, is 378 (356 + 22) weeks, or well over 7 (!) years (378 / 52 ≈ 7.3). (Cf. Wikipedia.)

    *At the time of other texts on the topic, I was under the impression that these weeks had counted in his favor. Note that he would almost certainly have had the same set of weeks at number one even had there been no freeze.

  2. The arbitrary removal of Djokovic from the on-going 2022 Australian Open makes any future comparison with Federer and Nadal flawed. Djokovic has now missed two majors, in which he would have been the favorite, for reasons external to him.* He has already lost the chance of taking the Grand Slam this year and he risks an unfair and premature end to his time at number one, as he has been given a severe points handicap. Unless one of the other two achieves far more than is currently likely, any edge that they might gain in some criterion (especially, majors won**) would be unfair. This especially should such a gain be made when Djokovic is unfairly absent and would have been favored to win, e.g. a gain through a Nadal*** win at the on-going Australian Open.

    *To be contrasted with e.g. missing a major due to injury, as there is a trade off—train and compete harder and increase the injury risk or reduce the injury risk and risk less success while healthy.

    **And note that the low usefulness of “majors won” was an early motivation behind my writings on tennis—even before the current situation arose.

    ***Federer is not participating due to an injury.

    Worse, there have been rumors that Djokovic might be prevented from competing at other tournaments too, including the other majors of 2022 or some of the future Australian Open tournaments. I have yet to hear a final word on this, but it would turn a severe distortion into a catastrophic one.

Written by michaeleriksson

January 20, 2022 at 9:15 am