Michael Eriksson's Blog

A Swede in Germany

Archive for June 2018

Why women’s roles have changed


In a recent text, I had an excursion on moving and an outdated world view. The first time I entertained such thoughts was in my early years in Germany, specifically concerning opening hours*, and how my lack of a house-wife put me at a disadvantage. As a next step, the observation presented itself that the opening hours could be a hindrance for women who wanted to work (or move from part- to full-time). Used as I was to the Swedish feminists, I even wondered why there were no loud protests demanding that the restrictive “sexist”/“patriarchal” regulations be loosened.

*While the current opening hours are fairly civilized, excepting Sundays, the situation used to be horrifying. For instance, when I started working in the late 1990s (and lost the flexibility of the student) there was a blanket ban after 8 PM on weekdays, after 4 (!) PM on Saturdays, and during the entire Sunday. To boot, I lived in a small town where even the legal limits were usually not exhausted: Most stores might have closed at 6 PM on weekdays resp. 2 PM on Saturdays, or even earlier. Correspondingly, going shopping after a long workday was often stressful or outright impossible; and Saturdays were almost as bad. I actually often resorted to buying groceries in the morning and going to work correspondingly later—even though this increased the distance to walk considerably. (Instead of just making a short detour on the way from office to apartment, I now had to go from apartment to store, from store to apartment, and then from apartment to office.)

Ruminating on this and a few other recent posts, I have to question how many societal changes in e.g. “gender roles” or opportunities for women actually go back directly* to legislation**, “enlightened attitudes”, whatnot—and how many to a naturally changing environment.

*As with e.g. a law intended to increase equality, as opposed to a law intended to liberalize the market that happens to have a positive side-effect.

**Irrespective of who is to credit or blame for the changes. The common feminist claim that they deserve the credit is usually unwarranted, seeing that at least the positive changes were typically the result of a much wider movement, societal tendency, whatnot. (Note that not all changes have been positive. Consider the U.S. “Title IX” in conjunction with college sports for a negative example.)

Look at e.g. a typical low- or mid-income* household a hundred years ago compared to today: No dish-washer, no washing-machine, no electric iron, no vacuum cleaner, … and consider how much extra work this implied to keep the household in shape and how much less time there was to go to an office or a factory floor. Or consider what was available to purchase at what prices, adding even more work, e.g. to mend clothes that today would just be thrown away, to grind coffee beans, to bake bread, to make meals from scratch, …

*Upper-income households were more likely to have hired help, making the practical burden of work less dependent on such factors. Indeed, with the relative rarity of household servants today, it is not inconceivable that some upper-income households are worse off today, when it comes to household work.

Or take a look at the number of children: A typical modern Western woman has her 1.x children. Compare the effort involved, even technology etc. aside, with having three, four, five children*; or consider how the typically more physical work made it harder to be employed when pregnant.

*Or more, depending on when and where we look. One of my great-grandfathers had nine or ten, if I recall my grandmother’s statements correctly. He was likely already unusual by then, but such numbers are not extraordinary if we go back further yet in time.

Or look at the care for others: Daycare for children? At best rare. Severely sick family members? Often still cared for at home. Retirement homes for the previous generations? Unless we count the poor-house—no.

Or consider the types of jobs available: The proportion of the workforce engaging in heavy* manual labor was considerably larger than today (and larger still if we go back a bit further in time). Such work was simply not on the table for the clear majority of women, because they would not be physically able to handle it—and unlike with e.g. modern day firemen, this would have been obvious from day one, not just on that rare occasion when a maximum effort was needed.

*Also note that “heavy” usually had a different meaning from today, including both longer work-days and, like above, fewer helpful tools. Try, e.g., to cut down a tree with a chain-saw and an axe, respectively.

A deeper analysis might reveal quite a few other similar differences between then and now. However, even from the above, it is quite clear that e.g. the relative benefits and opportunity costs of a woman staying at home versus going to work were very different from today.

As an aside, there are at least two changes that I have heard given somewhat similar credit in other sources:

Firstly, the birth-control pill, which is given credit* specifically for contributing to the sexual revolution. This, especially when extended to include other contraceptives and more tolerance of abortion, is probably correct. It would also tie in with some of the above, because not all pregnancies of the past were wanted and improvements in various forms of birth-control are very likely to have led to fewer children, even assuming unchanged attitudes.

*Whether the sexual revolution is actually a positive is a matter of dispute, but in e.g. feminist discussions it is invariably seen as positive. (My own feelings are a little mixed.)

Secondly, the impact of WWII on female employment (in at least the U.S.): With a lack of available men, women were drawn upon as a source of labor in some “traditionally male” occupations, which in turn gave them a foot in the door for the future and could have indirectly impacted attitudes. On the other hand, that women were used as labor in WWII could be taken as an indication that attitudes were not the problem, but (as above) that roles resulted from a pragmatic use of people where they brought the greater utility—the war might have done less to change attitudes and more to change utility.


Written by michaeleriksson

June 30, 2018 at 10:46 pm

A few points concerning the movie “Anon”


I recently watched the movie “Anon”, which follows a police detective working in a police system (and society in general) highly dependent on implants that capture and modify the visual* impressions of the populace—like a mixture of “built-in” smart glasses and some of my own satiric suggestions ([1]).

*I am uncertain to what degree other senses were involved.

While the movie as a whole is not that great, it demonstrates several conceivable future dangers.

Of these, the possibly most noteworthy are those present in [1]—or how a state like that could come into being*: Take “smart glasses”, make it an implant, connect it to the cloud, allow the police increasingly greater access to that cloud or even the implants themselves, and a nightmare scenario could very easily manifest itself.

*The movie itself gives no (in-universe) historical background; however, the speculation is fairly obvious.

Another issue touched upon repeatedly in my own writings is the low value of digital evidence: Whatever is stored*, transmitted, replayed, …, digitally can be manipulated, usually very easily, in order to give an incorrect impression. This applies not just to obvious items, e.g. entries in the access log of a server or the presence of illegal contents on a private hard-drive, but increasingly extends even to e.g. video capture**. Even the (extraordinarily naive and absolutely intolerable) assumption that law-enforcement personnel would never manipulate evidence is not enough to remedy this problem, nor is the strictest tracking*** by “chain of evidence”, because there is no guarantee that manipulations have not taken place through a third party.

*There is write-once storage that to some degree could remedy this. However, this presumes that write-once storage actually is used (which can be impractical for e.g. cost reasons and the inability to re-use storage); does not help against manipulations during retrieval of the data; and can be circumvented by simply copying the one write-once storage unit to an identical unit, making only the wanted modifications, and then proclaiming the modified copy to be the original.

**To achieve sufficiently high-quality manipulations or forgeries today is rarely practical. However, at the rate CGI has advanced over the years, we will eventually (likely: soon) reach a point where anyone with even a semi-powerful enemy could be at risk. (Whether we ever reach a state where a single skilled individual can achieve this with at most a few hours’ work, as implied in the movie, I leave open. However, given enough time, that too might be the case.)

***Especially since such tracking would almost certainly be largely digital…

Anonymity and privacy, even outside police work, are another important theme (as might be surmised from the title): Walking along a street and being able to see the names, occupations, whatnot of the other pedestrians might be interesting and useful—but the same applies in reverse. I, myself, certainly would not be comfortable with that. Extrapolate it a bit further, and assume that (drawing on the current U.S.) someone who once was caught peeing in the park has a “sex offender” sign displayed over his head, or that (drawing on Nazi-Germany) Jews, homosexuals, whatnot come with their own warning signs. What if a direct connection with e.g. a Facebook account is made, and passers-by can extract almost arbitrary information, e.g. relationship status, at will? Recall e.g. a recent assault over a mistaken identity; or note how easy it is for someone rooting for the wrong team or supporting the wrong party to be beaten up, if encountering the wrong crowd—or consider how information on income can affect the risk of being robbed or pick-pocketed.

From another perspective, consider the ability to replay the capture of previous sights—including e.g. love making. We could argue that that which we have once seen should be ours to see again—and I would mostly agree. However, it is easy to find special cases where this is highly disputable, e.g. when someone accidentally walks in on someone else who is having sex or otherwise being naked: It would not be unreasonable for the observed party to demand a deletion. Certainly, a kept recording might give far greater opportunity of observing details than the original (typically) brief flash. Similarly, there is a wide consensus that filming sex with a partner without consent is unacceptable—but what happens when everyone has a built-in camera? To boot, others can wish for even stricter criteria—I have, e.g., seen the opinion (but disagree) that even consensually filmed material must be destroyed after a break-up or that voluntarily given intimate images must be returned.

These problems are by no means limited to physical acts and nakedness: Consider e.g. the ban on cameras (including on cell-phones and notebooks) in many offices and factories. Or consider someone having a private conversation on which a third party can now far more easily listen in*.

*An early scene showed even the near-inaudible dialogue of some passers-by being translated directly to text.

Alternatively, consider the invasion of privacy implied by a spouse’s or parent’s request to see a certain section of recording (“Where were you last night?!?”)*: Show it and lose privacy; do not show it and the worst will be suspected. (A similar situation is discussed in a text on lies under oath.) An interesting twist is provided by two (real life) parents who are repeatedly in the news for trying to get access to a deceased daughter’s Facebook account: What if this scenario is replaced by parents/spouses/children/whatnot who gain access to their deceased children’s/spouses’/parents’/whatnot implant data, including extensive recordings?

*It is my strong personal belief that even children relative to their parents and spouses relative to each other have a right to a considerable degree of privacy; however, even those who do not share this belief (e.g. an over-protective parent or a wife who fails to understand that the members of a couple are still different people) must realize that there can be areas where a legitimate need for such privacy exists: Not everything that the one party wants to keep secret is necessarily harmful to the other, morally wrong, or susceptible to the (pseudo-)argument “the innocent have nothing to fear”. Consider e.g. a husband giving a female friend some help strictly for reasons of friendship, and a wife who has a history of jumping to (incorrect) conclusions about cheating.

Then again, we have anonymity (respectively the lack thereof) in the context of police work. I have earlier (notably in [2]) objected to e.g. computer searches for reasons like the presence of highly personal material and private information, as well as the risk that material that in theory would only be accessed by the police might leak out. What if the information collected includes basically everything seen or done by someone? (Including sex acts, intimate conversations, confidential business meetings, …)

Then there is the issue of hacking and security: Not only does this provide yet another channel through which private information can leak, but it also adds the risk of damaging interventions. For instance, the movie showed examples of visual input being sufficiently manipulated, in real time, that the victim could not rely on his eyesight. With this level of technology, it would be easy to e.g. have someone just walk into oncoming traffic. However, even with abilities more realistic by today’s standards, great harm can be caused, e.g. by having textual information altered to imply that another party is sleeping with one’s spouse. Looking at self-driving cars, with similar vulnerabilities and a greater current realism, we could have a hostile entity manipulate a car into taking actions that lead to a car crash, a run-over pedestrian, or some other calamity. (See also e.g. [3].)

On the other hand, if external access is technically and legally sufficiently limited, there can be a great upside to some of the technologies. Consider e.g. re-running a business meeting or a lecture to refresh a failing memory; re-living an enjoyable moment; or (most enticing to me) re-visiting a portion of prior life to have another look at how things were back then or how one has developed or not developed, what lessons can be drawn and what could have been done differently, etc.

As an aside, it is depressing that while we live in a time when privacy and anonymity are more urgent than ever before (for the simple reason that they are so much easier to violate), legislation and other “government behavior” shows a broad trend towards weakening both. The fear of terrorism and organized crime makes this partially understandable; but not only do the “big bads” have far greater means to circumvent such legislation than the average citizen, the measures are often obviously intended against crimes of any kind. Both these factors point strongly towards the damage done being greater than the benefits gained. What we need is the reverse trend—and this not only with regard to the government, but also to strengthen protection against e.g. profile-building private enterprises, for instance by making it possible to order even physical to-be-delivered goods (close to) anonymously and by removing antiquated laws like the German requirement for a hotel guest to register with full and real name and address.

Written by michaeleriksson

June 30, 2018 at 12:17 am


German World-Cup debacle / Follow-up: Poor decision making


As I wrote recently concerning the World-Cup game between Sweden and Germany:

As can be seen, the correct decision/the ideal outcome of the game cannot be determined without information that is not knowable at this time. In effect, this game is “unrootable” for me, and the interesting events will largely take place in the respective last of the three games each team plays.

(What is the most likely scenario? Well, on paper Germany is a clear favorite against Sweden, and will likely end up second in the group, behind Mexico, while Sweden and South-Korea are eliminated. How it plays out in real life is yet to be seen.)

By now I am left almost shocked: Germany very, very barely defeated the Swedes, in stoppage time—but…

…Sweden won the group and Germany ended up last(!), something almost inconceivable before the tournament.*

*Germany is not only the defending champion, but also came off a long line of good results (until the last few preparation games, where things started to fall apart).

That Sweden, in today’s third round, could beat Mexico was not entirely unexpected, after the two teams’ similar showings against Germany* and South-Korea; however, the 3–0 was highly surprising. Germany, in turn, should have beaten South-Korea, and that would have made the dream scenario of both “my” teams advancing come true—a scenario that seemed quite unlikely after the initial German failure against Mexico. Unfortunately, Germany continued a trend of not actually scoring, even while being the superior team in terms of play; the result at 90 minutes was 0–0; and during the desperate** German attacks during stoppage time, South-Korea scored twice…

*While Mexico won, it was mostly a matter of luck; while Sweden lost, it was on a last-minute action during stoppage time.

**Even a one goal victory, given the Sweden–Mexico result, would have been enough for advancement; any non-victory implied non-advancement.

This leaves us with an absurd situation, with Germany trailing even South-Korea and having the likely worst result in its World-Cup history—despite beating the eventual group winner and despite being the better team in all three of its matches.

As for Sweden, I note that it has now been part of the elimination of three supposedly strong teams: After beating out the Netherlands (2014/“reigning” World-Cup third-placer) for second place in its qualification group, Sweden beat multiple champion Italy in a play-off, and now reigning champion Germany is gone too. (Although, admittedly, Sweden was the only team not to beat the Germans…)

An interesting observation is that Sweden has mounted reasonably successful teams before and after the national-team career of Zlatan, but not during it. Unlike some other greats, he never really delivered in the national team, and the depth behind him appears to have been lower than before and after.

Written by michaeleriksson

June 27, 2018 at 5:39 pm

Follow-up: Poor decision making


Sometimes it is impossible to make a good decision, even with a sound attitude and complete knowledge of the knowable facts. My current dilemma regarding the FIFA World Cup is a good example*:

*Except insofar as following sports and/or basing team favoritism on factors like nationality can be considered irrational: I could always decide to not care at all…

My native Sweden and my adopted Germany are in the same group and will presently play each other. Which team should I root for?

In the four-team group with two teams advancing, the ideal would be that both “my” teams advance. However, this is tricky: Germany unexpectedly* lost against Mexico, while Sweden expectedly won against South-Korea. Should Mexico (as expected) win against South-Korea**, the only way to get both teams in would be for Germany to win both its remaining matches (including against Sweden), for Sweden to win against Mexico, and for the goal difference to play out fortunately between the three 6-point teams. More likely, considering the presumed weakness of the South-Koreans, one of the teams will make it and the other fail.

*In such statements, I go by the official seeding, which had Germany 1st, Mexico 2nd, Sweden 3rd, and South-Korea 4th.

**The match was completed during my writing: Mexico did win.

In that scenario, however, which team would I rather see advancing? Sweden is a little bit closer to my heart, but Germany has a far better chance at being successful in the knock-out phase. What if Sweden were to advance, only to lose immediately in the first knock-out round?

Also, if Sweden* is the team that advances, it would be beneficial to be the group victor. For that to happen, a loss against Germany would be very problematic. On the other hand, if Sweden were to beat Germany, the dream scenario of both teams advancing is pretty much ruled out. A draw is not that good either, because it (a) means that only two points in total (instead of three for a victory) are awarded, and (b) could lead to Sweden simultaneously missing the group victory and Germany being eliminated.

*Ditto for Germany, but their chances of a group victory are currently smaller.

Then again, should South-Korea upset Mexico, we have a completely different set of scenarios to look into.
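
As a minimal illustration of the main scenario above, the following Python sketch tallies the group points, assuming that Mexico beats South-Korea, Germany wins its two remaining games, and Sweden beats Mexico. The 1–0 score-lines are mere placeholders standing in for “a win”; only the win/draw/loss outcomes matter for the totals, with goal difference left as the tie-breaker:

    # Tally group points: 3 for a win, 1 each for a draw, 0 for a loss.
    def points(results, teams):
        table = {t: 0 for t in teams}
        for (home, away), (hg, ag) in results.items():
            if hg > ag:
                table[home] += 3
            elif hg < ag:
                table[away] += 3
            else:
                table[home] += 1
                table[away] += 1
        return table

    teams = ["Germany", "Mexico", "Sweden", "South-Korea"]

    # Already played: Germany lost to Mexico, Sweden beat South-Korea,
    # Mexico beat South-Korea. (Placeholder 1-0 score-lines for any win.)
    played = {
        ("Mexico", "Germany"): (1, 0),
        ("Sweden", "South-Korea"): (1, 0),
        ("Mexico", "South-Korea"): (1, 0),
    }

    # The scenario needed for both Germany and Sweden to advance:
    scenario = {
        ("Germany", "Sweden"): (1, 0),
        ("Germany", "South-Korea"): (1, 0),
        ("Sweden", "Mexico"): (1, 0),
    }

    print(points({**played, **scenario}, teams))
    # {'Germany': 6, 'Mexico': 6, 'Sweden': 6, 'South-Korea': 0}

Three 6-point teams and a winless South-Korea: who actually advances would then hinge entirely on goal difference (and, failing that, on further tie-breakers).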

As can be seen, the correct decision/the ideal outcome of the game cannot be determined without information that is not knowable at this time. In effect, this game is “unrootable” for me, and the interesting events will largely take place in the respective last of the three games each team plays.

(What is the most likely scenario? Well, on paper Germany is a clear favorite against Sweden, and will likely end up second in the group, behind Mexico, while Sweden and South-Korea are eliminated. How it plays out in real life is yet to be seen.)

Written by michaeleriksson

June 23, 2018 at 7:13 pm

Some problems with Wikipedia


Preamble:
On several occasions, I have started a text on problems with Wikipedia. With too many items to discuss, I have always lost interest before I was even half-done. To finally get something published on the matter, I have taken my last attempt (from May 2017), struck out a few keywords-that-should-have-been-expanded-upon, and polished it a little (including a few links to more recent posts); I acknowledge that more specific examples would be helpful, but, with the intervening time, the old examples are long gone. The result is the rest of this text.

I have long been a great fan, avid reader, and sometime editor of Wikipedia. For at least some part of its history, I have considered Wikipedia worth more than the rest of the Web* together. It has been all the more painful for me to see the gradual degeneration of Wikipedia, the loss of the encyclopedic ideal, and the takeover by politically and ideologically motivated editors in many subject areas. Below I will discuss some of the current problems, many of which could be explained by a drift** in authorship from people of above-average intelligence, from an academic background, and/or with more technical interests to members of the broader masses.

*Here it pays to make the distinction between Web and Internet. Email, e.g., is part of the latter but not the former.

**A similar drift explains many of the things that have changed for the worse on the Internet, in the Open-Source community, and similar, over time. Indeed, the people who dominated the Internet when I first encountered it (1994) form a very small minority by now. Also see a post expanding on these thoughts.

  1. A great proliferation of “popular culture”* sections that bring little or no value, but waste space and the editors’ energy. Why on Earth does every nit-wit who has just heard topic X briefly referenced on “Family Guy” or “The Simpsons” feel the need to add this to the Wikipedia article on X?!?

    *The section heading is almost always a variation of, or contains, “popular culture”. The contents are not always compatible with this and I use this phrase without implying that it is well chosen.

    Discussion of films and books referencing X can have a valid place, but this requires some degree of significance of the work, that the presence of X in the work is considerable, and that it actually brings some added value. In rare cases, there might be reason to include some mention to show that X has had an influence on popular culture, but it is then almost always better to just state that it had an influence, rather than to attempt to list every single TV episode that made a fleeting reference. For an article on Mozart, e.g., the movie “Amadeus” is highly relevant; his (hypothetical) appearance on an episode of “Family Guy” is not; that a character on some-TV-show-or-other liked his music is utterly irrelevant and unencyclopedic.

  2. A decrease in the quality of language. This includes a drop in register and increasing deviations from encyclopedic language in favor of journalistic language; and a number of more detailed issues. (See excursion at the end.)
  3. A PC/feminist/whatnot influence, which includes spurious or irrelevant references to social constructs; uncritical support of “equality of outcome” as something positive, or an implicit assumption that any difference in outcome is caused by a lack of “equality of opportunity”; every second article having a section on feminism’s role for the topic, the feminist take on the topic, whatnot*; categorical and unscientific denial that races exist, even in articles where the claim has no obvious purpose or relevance; …

    *Notably, the relevance of specifically feminism is in most cases so low that another dozen movements and whatnots would have an equal right to be included—but rarely are. A particular idiocy is “feminist film analysis”, which does not even appear to be an honest attempt at applying a certain perspective and methodology on film analysis, instead giving every impression of just trying to extend the pseudo-science of “gender studies” to a new area.

  4. The common application of “feminist” to persons who lived before the term was invented and who e.g. have written on women’s issues, from a woman’s perspective, or using “strong women”; have been involved with some part of the women’s rights movement or expressed opinions in that direction; or even just have been successful women. Some of these might have identified with feminism; others would not have. Associating them with this poison of the mind in such a blanket manner is insulting, stupid, and/or intellectually dishonest (depending on the motivations). Giving feminism pseudo-credibility by claiming that historical persons were feminists is just atrocious.

    To put it plainly and simply: Someone who likes strong women is NOT automatically a feminist. Someone who wants men and women to be equal in rights and responsibilities is NOT automatically a feminist—indeed, more likely than not, a random feminist will not believe in true equality. Someone who thinks that women should have the right to vote is NOT automatically a feminist. Etc. To claim any or all of this is the equivalent of saying that anyone who wants to improve living conditions for the poor is a socialist or that all communists are revolutionaries.

    See also a recent post covering similar ground.

  5. Undue reliance on and abuse of the principle of citability:

    One of the core principles of Wikipedia, and originally a very good thing, is that Wikipedia should only reflect what can be supported by reputable sources—not personal conviction, rumors, speculation, or even (possibly faulty) synthesis of facts by the editors. This, in theory, should keep the articles low* in bias, speculation, pseudo-science, mere personal opinion, …

    *A complete absence is likely unreachable.

    With time, this principle has turned out to be very vulnerable to both abuse and incompetence, and it fails wholesale in areas where pseudo- or proto-sciences (and other fields of “expertise”) are dominated by true believers, ideology, and similar. Gender studies is the paramount, but, unfortunately, not the only example. Because there is no equivalent of astronomy, the “astrologers” are the only ones citable and the articles are, by analogy, filled with astrology instead of astronomy.

    Even in areas where there is no such dominating streak of “astrology”, problems are quite common, notably that individual editors refuse to reconsider a claim which is based on a single, often low-quality, source; fail to realize that the opinion of the source is not sufficiently supported to be viewed as fact; or similar. The use of sources that either do not satisfy Wikipedia’s criteria for good sources or do-but-should-not (e.g. newspaper articles) is abundant. Changes in what mainstream science considers correct are not necessarily reflected and it is not uncommon that scientists with no or minor expertise in a subject area are used as authorities (most of the, absurdly many, references to Stephen Jay Gould are unwarranted for this reason—to the point that a blanket ban on using his name might be a good idea).

    This well-intentioned principle can in the end be what kills Wikipedia, and I see it as a near necessity that Wikipedia (a) is more explicit on the difference between fact and opinion*, (b) forces a higher degree of plausibility checking and culling of sources**, e.g. by excluding newspapers and other journalistic reporting for material that cannot be considered “current events”. With regard to (a), I also note that a good encyclopedia considers the risk that what is considered fact today can turn out to be wrong tomorrow, e.g. because a better scientific theory appears or because a higher court overturns a conviction***.

    *Consider e.g. the difference between “The sun is a black hole[1].”, “According to Meyer[1], the sun is a black hole”, and “Meyer[1] represents the minority view that the sun is a black hole; mainstream science considers it a star[2][3][4]”.

    **However, great care must be taken. It is very easy for such checks to degenerate into “agrees with me, OK; does not agree with me, not OK”. Criteria could include strength of source; whether the claims are logically consistent; whether a claim is actually supported by the source itself or just taken over from another source (note how “facts” are sometimes uncritically propagated in feminist propaganda with no-one knowing when and where the claim originated, cf. the Woozle effect); whether the claim is actually present in the source; to what degree the claim represents personal opinion/speculation by the editor, respectively has support in the scientific community. (Where “support” need not imply even a majority opinion, but should go beyond rare fringe views.)

    ***Generally, I urge everyone, not just Wikipedia editors, to prefer formulations like “murder convict” over “convicted murderer”, unless the level of evidence goes considerably beyond even “reasonable doubt”.

    This becomes particularly problematic when a given article has one or several self-appointed “owners” and these have a strong opinion. The result is typically that controversy* is not discussed in the article, often hidden behind a flawed consensus among the editors** of the article, and attempts to introduce alternate view-points often result in these being deleted and another several references being tacked on to the view-point pushed by the self-appointed owners. (This can result in single statements having between half a dozen and a whole dozen references—without the reader having any greater certainty about the correctness of the claim.)

    *This is a scenario to be contrasted with the anti-evolutionary mantra “teach the controversy”: For evolution, there is no controversy within science, only in e.g. U.S. politics and popular opinion. The cases I discuss are often quite the reverse, with an existing disagreement between scientists or even branches of science, whereas the alleged consensus is often political, ideological, “popular”, whatnot. Worse, the latter type of consensus is sometimes allowed to trump a contradictory scientific consensus… Unfortunately, I kept no specific examples when writing the original version of this text; however, a typical generic example (not necessarily present on Wikipedia) of such a reversal is I.Q., where main-stream politicians and “enlightened” citizens “know” that I.Q. only measures how well someone can do an I.Q. test and that I.Q. tests are flawed, evil, and “culturally biased” to begin with—something entirely at odds with what science says on the topic.

    **As opposed to the “consensus among scientists”.

  6. Increased use of animations instead of individual images to illustrate processes. Individual images are usually the superior choice for illustration, seeing that the users can jump back-and-forth as they please and can take their time or not. In addition, animations are highly destructive when trying to enjoy other parts of the page—like trying to read a book when someone waves a hand in front of the page once every second or so. With (at least) earlier browsers/computers and some forms of animation (notably Flash) this also meant a considerable performance drain, especially for users of tabbed browsing. (I regularly have dozens of tabs open for days or weeks.) Unsurprisingly, to compensate, many users prefer to disable animations entirely, and they then have the problem that the animations are reduced to a single individual image with little or no value.
  7. There are a number of issues on the technical side, including search terms being replaced by something entirely different when Wikipedia presumes to know better than the user what he should search for, use of page transitions (evil, should never have been invented), and highly intrusive* requests for donations when JavaScript is activated**.

    *I have no objection to a valuable “pro bono” service asking for donations from its users; however, when it does so in a manner that dominates the page and comes close to an optical slap in the face, well, that is something very different. If nothing else, this type of behavior probably makes people less likely to donate…

    **This was true around the time I made notes for the first version of this text. I have not verified whether it still is. (I rarely allow JavaScript, in general; and doing so where anyone can add content, including malicious JavaScript code, would be quite dangerous.)

In addition, there are a few problems present that have been with Wikipedia from the beginning (and which I therefore do not discuss above), including a very poor or non-existent coordination between different language versions and that very many U.S. editors fail to understand that en.wikipedia.org is the English language Wikipedia—not the U.S. national Wikipedia. The least of the problems that arise from this is the constant abuse of “American” to mean “U.S.” (with variations). Others include writing from a purely U.S. perspective on an issue (including ignoring legal differences between different countries), not mentioning that a particular claim pertains only to the U.S. (including such bloopers as mentioning “the Department of Justice”, in a generic context, without specifying that the U.S. Department of Justice is intended), giving strictly U.S. pronunciations for English words commonly used in the rest of the English-speaking world, etc. On a few rare occasions, I have actually seen articles use formulations like “this country” in a manner that implies that both editor and reader are in the U.S.

Articles on movies and books tend to be of a particularly low quality, including being filled with personal speculation* about motives, events, and implications. An ever-recurring special case is to claim that a character died, lay dying, fell to his death, or similar, even when the work described leaves the eventual outcome unstated. In many or most cases, death is indeed the most plausible interpretation, but drawing such conclusions is not the role of an encyclopedia—especially since they are quite often incorrect: Fiction is full of characters who appear to die only to come back at an (in)opportune moment.

*Barring explicit claims by the creators of the work, any interpretation is speculative. This is one area where even a reference to a credible source does not alter the irrelevance of the interpretation to an encyclopedia. (Still, personal speculation by the article’s editors is worse.) In fact, even when the creators do make a claim or have a very clear intention, some caution can be needed, as can be seen e.g. by the “death” of Sherlock Holmes—protests by readers famously forced Arthur Conan Doyle to revive him…

Excursion on language and related issues:
With the great number of editors, each with their own weak spots, it is impossible to give an even remotely complete list. However, the following are disturbingly common:

  1. An incessant use of constructs like “being X, he Y-ed”*. Hypothetical examples include “Born in Swaziland, he studied to be a surgeon.” and “A natural red-head, he was sued for malpractice.”, typically with a similarly low degree of connection between the two parts of the statement. In the rare cases where a strong connection is present, they are more acceptable, e.g. “Going blind, he was forced to give up surgery.”; however, even then other formulations are usually preferable.

    *I am unaware of an actual name for this type of construct.

    As for the reason for these ugly formulations, I would speculate that we, by now, simply have editors following a hype or trying to use a “cool” way of writing, without considering factors like coherence and understandability. Part of the earlier cases could have come from either (unfortunate?) moves of insertions* or the spurious removal of words** in a misguided attempt to shorten the text***.

    *E.g. turning “X, an experienced surgeon, was skilled in anatomy.” into “An experienced surgeon, X was skilled in anatomy.”, which to some degree misses the point of an insertion, and which would be better solved by the original from the next footnote. Note that the variation “X, as an experienced surgeon, was skilled in anatomy.” is also an acceptable starting point, and might be preferable for having a greater internal connection—and would lead to exactly the original of the next footnote. (The greater internal connection can be seen by comparing e.g. “X, a young Frenchman, was skilled in anatomy.” with “X, as a young Frenchman, was skilled in anatomy.”: The latter implies a connection which simply is not warranted, while the former combines the same statements without implying a connection.)

    **E.g. turning “As an experienced surgeon, X was skilled in anatomy.” into “An experienced surgeon, X was skilled in anatomy.”, which makes the sentence harder to understand and increases the risk of any additional error distorting the meaning.

    ***An older text on my own lack of brevity contains some words on “The Elements of Style” and its corresponding recommendation.

  2. Annoying and (depending on the intentions of the editor) possibly offensive use of “their” and “they” to indicate the third-person singular, even in cases when it introduces ambiguity. Cf. another recent text.
  3. Endless repetitions of “then [s]he”, “[s]he also”, and similar in biographic articles, e.g. to list various movies in which someone acted. (Many biographic articles give the impression that the editor is a high-school dropout.)
  4. Insisting on putting things in prose that would be better handled by a list or a table. (Overlapping with the previous item: A formal list* would have eliminated the awkward formulations.) Annoyingly, there is even a tag that suggests that this-or-that would be better off as prose, which is usually applied to perfectly legitimate lists and tables, while there is no corresponding tag for prose that should be moved to a list or table. (Or there is one that is never used…)

    *E.g. what is created by the UL or OL HTML-tags. Note that I do not necessarily suggest the type of use that I often resort to on WordPress. (My use is partially driven by not wanting to use regular headings/Hx-tags within WordPress, where the effects are not under my control; however, in other contexts, headings are often the better alternative.)

  5. Insertion of a colon (“:”) where it does not belong, e.g. “Examples of names include: Jack, Jill, and Spot”, where “include” implies that no colon should be present, the correct version being “Examples of names include Jack, Jill, and Spot”. Similarly, there should be no colon after words like “are”. In contrast, “Examples of names: Jack, Jill, Spot” would use the colon correctly.
  6. Use of “may” as a replacement for “can” and “might”, apparently under the assumption that it is “fancier”. There is a difference in meaning between the three: “may” implies a permission, “can” an ability, and “might” a possibility.* They should not be used in each other’s stead unless the difference in meaning is sufficiently small in the given context.**

    *For instance, someone who “can come to visit” has the ability to do so, himself being the limiting factor; someone who “may come to visit” has been allowed to do so, the host (usually) being the limiting factor; someone who “might come to visit” is still waiting to make a final decision.

    **But I admit to not always doing so perfectly myself. I was particularly prone to replace “might” with “may” when I was younger. However, note that I am not complaining about the odd error here and there—some Wikipedia articles appear to use “may” as the sole word for all three functions.

Written by michaeleriksson

June 22, 2018 at 9:36 pm

Poor decision making


Poor decision making, especially decision making based on faulty information or flawed criteria, permeates modern* society.

*Often non-modern society too—the central problem is likely the flaws of humanity. However, in the past some decisions (e.g. concerning marriage) might have been more rational, and often there was a lack of choice that prohibited faulty decisions: The son of the blacksmith might have been happier as a shoemaker, but his becoming another blacksmith was often a foregone conclusion. (A lack of choice, obviously, can have more disadvantages than advantages…)

To look at a few examples:

  1. Employment: As I have seen again and again, for an employee to judge whether he will be happy with a certain employer, and vice versa for the employer, is virtually impossible using the criteria that normally lead up to the employment (interview, CV/cover letter resp. company presentation, general reputation, …). True, it is often possible to filter out poor candidates at an early stage, but even this comes at a high risk of false negatives—people/companies who are filtered out even though they would have proved a good match, had the attempt been made. More importantly, it is quite hard to tell the difference between an adequate, a good, and a very good candidate (people who claim to be able to are usually wrong).

    For instance, what if a prospective employee is dazzled by big promises* and a high salary, only to find that his colleagues are hard to work with, that he is stuck in a noisy “open plan” office**, or that the tools provided for his tasks are infuriatingly inefficient? When I consider a new project, I make a point of at least looking at the offices, checking out the main software used, and (if possible) exchanging a few words with a regular employee in the absence of the interview team, to have at least some indication of the general mood. However, even so, the first few weeks of actual work can change the original impression massively—and the correlation between the original impression and the final one is often weak.

    *Even assuming that these are truthful, which is by no means a given.

    **These being another example of poor decision making through poor priorities: I cannot rule out that there are fields or positions where they bring value, especially where salaries are low and little thinking required, but when we look at e.g. software development, they are disastrous. Here we have decision makers seeing a small savings in office costs and failing to consider the negative impact on the work environment and the productivity of the staff. (Exactly how many people in one office is practical will depend on a number of issues, including the distance between individuals, the loudness of the work, the degree of consideration shown for each other, …; however, going beyond four is rarely a good idea, and when there is little need for interaction or the offices are small, lower numbers are better.)

    In reverse, having a great CV and being good at self-presentation is no guarantee of a good work performance. Indeed, self-presentation might even correlate negatively with performance. Even an apparently solid education brings comparatively little information with today’s grade and degree inflation, especially in the softer fields.* References and the like are sometimes based more on personal liking and contacts than on truth; and, in Germany, it is almost always forbidden to say something negative about the former employee, which has led to a complicated set of euphemistic codes that even HR staff often misunderstands… Worse, some employers actually allow the employee to write his own reference, to his own preferences, and then sign it as is—this way the risk of being sued for a poor reference is removed… More can be gained through probing the attitude** and problem-solving skills/intelligence/whatnot of the applicant, but this is rarely done; I.Q. tests, a very valuable source of information, are basically never used, be it through PC prejudices or (as in the U.S.) unfortunate legislation or legal precedent.

    *Note that the main benefit of a higher degree was not to demonstrate acquired knowledge but to filter for innate ability, especially intelligence. This filter effect has been diminished severely over time. (Fields, like medicine, where a high degree of raw knowledge must be present, form an exception to this.)

    **Doing this can be hard when we speak of general attitudes, like industriousness. (Anyone can claim to be this-or-that.) However, answers concerning more specific attitudes tend to reflect the truth, and can provide valuable information. For instance, in software development, it can really pay to give a few non-leading questions to probe what the applicant thinks of code quality (good), pre-mature optimization (bad), or copy-and-paste vs. abstraction and re-use (former very bad; latter usually good), …

    An almost paradoxical standard question during interviews is “Why do you want to work here?”, which leaves the non-naive applicant in an uncomfortable spot, forcing him to either lie or tell a truth that might damage his chances. Those who can truthfully give a sufficiently complimentary answer to this question are either highly naive or in an unusual situation (say, having previously had an internship at the same company). A true answer by a non-naive applicant might be “I have to work somewhere and based on my preliminary research, you have given a sufficiently good impression that I am willing to give you a shot.”, which could very easily result in a “Thank you for your time. Don’t call us; we’ll call you.” as an immediate reaction. Worse, a truthful answer might amount to “either here or unemployment”…

    As an aside, an exception to this can include cases where someone has a “calling” into a certain field (e.g. religion or medicine) and the options in that field are limited. My mother, e.g., wanted to work as a priest; in Sweden, this came close to necessitating employment with the state church. Even here, however, it is not normally the employer that has drawing power—it is the field. To boot, the assumptions about the field are often naive to begin with.

    See also e.g. an older text on application paradoxes.

  2. U.S. college applications are similar to employment. If we look at it from the college’s point of view, we have criteria like grades, suffering from severe inflation (and where top colleges can have more 4.0s than they can accept); recommendation letters that amount to who knows whom or who was liked by whom; extra curriculars that often demonstrate nothing more than a willingness to work* or even an over-reliance on school for activities and an inability to get by without “organized fun”; “AP” classes that mostly compensate for the weakness of the “regular” high-school classes (and are also subject to inflation); and the feared essay, the evaluation of which is extremely arbitrary and subjective, especially when the “right” opinions can come into play, which unduly favors those with an ability to write well, and which gives posers an advantage over non-posers. Then there is the whole personal connection, “my parents are alumni”, etc., which says very, very little about the suitability of the prospective student.

    *Especially on TV, where the main reason to have extra curriculars is exactly to get into college… (As opposed to a genuine interest in a certain area.)

    Today, the best shot at a good selection appears to be the SATs (and similar tests) that try to assess scholastic aptitude, but even here the value of the test has, possibly by design, grown weaker over time—and there appears to be a trend for colleges to not require it anymore… Cf. also a discussion of test vs. grades.

    How to do it better? Reverse grade inflation, strengthen the SATs (especially through an increased “g loading”), and look at some combination of grades and SATs; forget about recommendation letters and all other bull-shit. The Swedish system, while far from perfect, does a reasonable job of the combination. (But not at suppressing grade inflation and keeping the counter-part to the SATs, Högskoleprovet, at a high quality.) A particular benefit is that the admittance system is almost entirely centralized and involves far less effort on behalf of both students and colleges. My own application consisted of filling out a single form, indicating what schools and programs I prioritized how. Compare this to having to write essays, gather recommendation letters, whatnot, and send them to several or many individual colleges …

  3. Politics is an area fraught with problems, including that many or most politicians deliberately try to mislead the voters in order to be elected, some resorting to populism, some to scares, some to empty promises, some to outright lies, … This leaves the voters who have the brains and the attitude to make a good decision in the situation that they cannot, for lack of information. At the same time, the voters who lack in these regards are fooled into making the decisions that the politicians want to see (i.e. their own election).

    Worse, the elected politicians themselves are then confronted with decisions for which they often lack the intelligence or education; where lobbyists, non-neutrals, civil servants, whatnot provide flawed information; and where the decisions are often governed by the wish to be re-elected, rather than by doing what is right for the country or keeping the trust placed by the voters of the previous election. A notable complication is that promotion to a certain office in a government/administration/cabinet is often not based on expertise in the right area, but on importance within the party, previous experience as an office holder, “years of service”, … Looking e.g. at Sweden, the match between ministerial portfolio and experience/education rarely exceeds that of a random choice.

    See e.g. [1] for a longer treatment of problems with democracy.

  4. Business–consumer relationships are quite similar to the first half of the previous item: The consumers are to be tricked into becoming customers by any means necessary, including emotional manipulation, misleading product claims, the hiding of vital information in the fine-print, … Again, those with a brain have too little (or flawed) information; again, those without one fall prey.
  5. Marriage and other long-term relationships form possibly one of the most interesting areas, where there could be much to gain through rethinking the current Western approach. (But I note and fully acknowledge that this item is speculative.)

    For starters, most relationships are based on superficialities like physical attraction, shared interests, or even just opportunity. In the short term*, this is not a problem, but when something more long-term is called for, is the filtering improved sufficiently? More likely than not, aspects like romantic love (cf. below), habit, convenience, the sunk-cost fallacy, or a fear of being alone carry a relationship into a much deeper territory than it deserves.

    *By which I here imply something that has developed into an actual relationship, as opposed to e.g. just “dating”, but which can still be shallow and definitely is comparatively short in length. (Possibly between a few months and a year.) A differentiation between dating and a short-term relationship is important insofar as the former usually contains a considerable amount of probing and deliberation, while the latter does not. In a rough analogy, the one is the application and interview phase preceding an employment, the other is the early days of the employment itself.

    Then there is the question of love, which proverbially blinds us: Romantic love can develop between people who are not really suited for each other, for reasons that include e.g. an early naive internal image of the partner, wist- or wishful thinking, and that pesky oxytocin—often between people who just happen to be in an originally casual relationship. In a next step, love “hides” incompatibilities, makes annoyances easier to tolerate, and, in general, makes the relationship “smoother”. And then the love fades and two people are stuck together who do not belong together… If this is just a long-term relationship, no marriage, no children, there is still the opportunity to move apart with comparatively little pain and trouble, but if there are children or if a marriage has taken place, this is an extremely bad situation.

    As with employment above, long-term relationships are better filtered than could be done with a coin-toss—but not by that much: Many obviously poor matches can be filtered out early, but when it comes to differentiating between a good and a barely adequate match, or between an excellent-seeming and an actually excellent one, enough information is often not present until it is too late.

    Here I have the strong suspicion that the arranged marriages of old (or as practiced by some non-Western cultures) are actually the better way, after allowing for some modifications. For instance, consider a scenario where the respective parents* make a “short-list” of half-a-dozen to a dozen promising candidates with mutual parental approval for the respective (adult!) child, and the children then work** through their respective lists until a mutually acceptable match (based on non-romantic and openly communicated criteria) has been found. Chances are that such a semi-arranged “Vernunftehe”*** would work better than a modern love story. I note that this is based not only on more rational decision making, but also on the parties entering the union with a more realistic view of each other and more realistic expectations.**** I also stress that there is a large difference between a marriage not based on romance/romantic love and a loveless marriage: A romantic marriage will often develop into a loveless one over time; a Vernunftehe can equally develop into a loving relationship—and the same type of loving relationship that a successful romantic marriage would eventually see: Love based on long companionship, deep knowledge of each other, mutual care-taking, having gone through hardships next to each other, …

    *In all fairness, had my parents tried this with me, I would likely have reacted very negatively. Then again, it was a long time before my more romantic take on these things started to fade. Today, I have great problems seeing myself married (if at all) outside a “Vernunftehe”.

    **Taking time to get sufficiently acquainted and to clearly communicate and discuss all relevant issues. How long this should take will vary, but going below several days of interaction spread over several weeks seems risky to me. On the other hand, increasing the amount of time too much might cause the decision making to be clouded through emotionality.

    ***Literally, “marriage of reason[ing]”. I prefer the German term over “marriage of convenience”, partially because the literal meaning of the German term catches my intentions better, partially because I am not certain of the exact equivalence: Wikipedia currently has a German link to “Scheinehe” (a sham marriage, e.g. for “green card” reasons) and an internal description that is potentially closer to that term (e.g. in that not only “love” is excluded as a motivation, but also “relationship, family”). Vernunftehe implies a genuine, honest marriage based on reason rather than romance; Scheinehe implies a mere marriage-in-name, with the “spouses” normally never even having lived together. To boot, cf. above, it is not a given that a marriage based in reason will fail to develop love. (Wikipedia surprises me here, because the literal meaning of “marriage of convenience” has implications more compatible with Vernunftehe than Scheinehe, and my previous understanding of the term was also equivalent to Vernunftehe. A translation as “arranged marriage” would underemphasize the involvement of the spouses and overemphasize that of others.)

    ****Indeed, my impression is that many modern divorces go back to one party (usually the woman) having had unrealistic expectations, and then failing to stand by the promise of “or for worse”. (Often in combination with a failure to appropriately communicate these expectations.)

    Barring this, at a minimum, the prospective spouses should have a good long look at each other’s parents (especially the mother of the bride and the father of the groom) and how the parents’ relationships have fared. We are not our parents, but there is still usually quite a bit to be learned (with a reasonable degree of likelihood) from them regarding ourselves (resp. about others from their parents)—including much that might not yet be obvious when looking just at the younger generation. (For reasons that include the effects of physical aging and mental maturation, stations in life not yet encountered, and experiences not yet had—especially when it comes to marriage, parenthood, and the like.)

Excursion on knock-out criteria during affluence:
A common luxury problem is that there are “too many” alternatives. This too can lead to poor decision-making. For instance, I once heard the anecdote that someone hiring had two sacks of applications and just threw one of them away, giving the reason that a good employee needed to be lucky… More generally, the more candidates there are, the less time there is for each applicant, the greater the amount of early filtering needed, and the more superficial the criteria used. For instance, in a strong economy, I have myself used criteria like “website requires JavaScript” or “they demand my CV as an MS-Word document” to rule out some potential employers without any further research. Both of these criteria do tell me something about the company, but they are still superficial on a level comparable to being unimpressed with a company presentation, and in no way comparable to having worked there for a few weeks—especially since both point to problems that do not necessarily affect the department I would be working in. (However, it could also be argued that boycotting companies that are so bad at web design or so applicant-unfriendly is the ethical thing to do.)

Similarly, we are now so inundated with new movies, TV series, books, …, that very strong filtering is needed for anyone who wants to do anything else with his life than watch TV (etc.); in contrast, in the 1950s, people watched what was available. For the broad masses, such filter criteria could easily become reduced to “have I heard of it from advertising”, “was the trailer good”, or something otherwise mostly being a matter of manipulation by e.g. a TV studio. Even the more refined must use somewhat superficial criteria, e.g. deciding not to give a TV series a fair chance based on others’ negative impressions on a rating website or one’s own impression of just the pilot—both of which can give a very incorrect impression (for instance, the pilot of a TV series is often one of the worst episodes).

Excursion on “Black Mirror” and partners:
A particularly interesting episode of the TV series “Black Mirror” provided some inspiration for the above. It centers on a simulation of relationship interactions between different partners, running at a far, far higher speed than the real world, in order to determine who would be suitable for whom and to make a corresponding recommendation. Something like that (in the unlikely event that it ever becomes technologically feasible) could go a long way towards preventing marriage problems (with obvious extensions to other areas, including employment and elections). Unfortunately, the episode still fell into the trap of romantic love: Instead of simulating what would have happened in the case of marriage, it simulated what amounted to “who feels the strongest attraction for whom” or “which pairing shows the greatest compatibility based on dating”, which implies a romantic notion of a quasi-magical connection between two people.

However, even absent simulations, parts of the approach could still be adapted in real life: Take two complete strangers; put them in a restaurant on a date; if that went well, put them alone in a cottage for a few days; if that went well, make them a couple-on-probation for a few weeks; … (Note that the schedule is considerably accelerated compared to regular dating, implying that certain revelations and experiences also happen a lot faster, in turn making the mutual evaluation a lot faster, in turn making a happy ending or a move to the next partner come a lot faster.)

Indeed, this (in a fortunate coincidence) ties the two excursions together: I once read an interview with the German literary critic Marcel Reich-Ranicki, who described his methodology for not drowning in unread books while still keeping his filtering somewhat non-shallow: He read the first twenty pages of a book; if these were sufficiently pleasing, he skipped ahead some distance, possibly a hundred pages, and read another twenty pages; if these were also sufficiently pleasing, he read the entire book. (Presumably, his approach also contained other steps of pre-filtering to determine what books even got the benefit of the first twenty pages—I doubt that “Twilight” made his reading list…)
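For the programmatically inclined, this methodology amounts to a simple two-stage sampling filter. Below is a minimal sketch in Python; the function name, the page counts, and the is_pleasing judgment are my own illustrative assumptions, not something taken from the interview:

    # A two-stage sampling filter in the spirit of the above: read an early
    # sample, then a later one, and commit to the whole book only if both please.
    # (All names and numbers are illustrative, not from the original anecdote.)
    def staged_filter(pages, is_pleasing, sample_size=20, skip=100):
        first = pages[:sample_size]
        if not is_pleasing(first):
            return False  # rejected after the first twenty pages
        start = min(sample_size + skip, max(len(pages) - sample_size, 0))
        second = pages[start:start + sample_size]
        return is_pleasing(second)  # only now is the entire book read

(A pre-filtering step, as speculated above, would simply decide which books are handed to such a filter in the first place.)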

Written by michaeleriksson

June 18, 2018 at 10:25 pm

Disturbing German news

with one comment

Today, I stumbled upon two German news stories that were both highly disturbing, overlapping with some of my writings, and showing how easy it is for any of us to fall victim to forces that we might naively believe ourselves protected from*.

*E.g. because “the innocent have nothing to fear”, “things like that only happen to others”, …

Firstly, some poor sod has been assaulted in his own apartment because of a TV program on pedophilia*: Some viewers misidentified him as a pedophile** from the program and took it upon themselves to beat him up so badly that he almost died***…

*According to the article and/or the TV program: There is a fair chance that the label is, for the umpteenth time, abused to include interest in post-pubescent “children” younger than 18, which is in a different realm than pedophilia—sexual attraction to pre-pubescent children.

**From the sparse information given, it is not clear whether he was identified as someone who actually had abused children (or “children”), or as someone who merely felt a sexual attraction towards them. Both are conceivable, considering how many appear to consider it impossible for a pedophile to not control himself; but the latter would make his attackers the more monstrous.

***Whether the attackers deliberately tried to kill him, whether his death was not intended, but at least considered acceptable, or whether an intended lesser attack “just” got out of hand, is not stated. However, if, as it appears, seven to ten people physically assault someone, it is almost a given that “lethal force” applies, irrespective of intent.

There are at least three important points to consider:

  1. That self-proclaimed “good” people who commit evil deeds are worse than the “evil” people who do not—these “good guys” are the true evil, the true monsters. I note that even if the victim had been a child-abuser, chances are that his crimes would not have warranted his death; and unless the abuses had been unusually bad, his attackers proved themselves to be worse monsters. Here the victim was innocent…
  2. That it is extremely important to get the facts straight before taking drastic actions. Indeed, one of the reasons why the justice systems in “civilized” countries put emphasis on “due process”, “reasonable doubt”, etc., while strongly limiting self-justice, is exactly to try to prevent such scenarios. Regrettably, innocent people are still regularly convicted—and if a professional justice system can fail, how can a mob of TV viewers presume to take action?
  3. That there is tremendous danger in an attitude of “he is evil; he must not live”, “he has the wrong opinion; he must not speak”, “he does not support our cause; he must not vote”, …

Depending on unknown-to-me details of the case, other points might need making. For instance, if a “passive” pedophile has been grouped with child-abusers, this exemplifies both the danger of inferring action from opinion/being/character/whatnot (or of treating them as equal to action), and the danger of believing that what applies to the group applies to each individual member of the group*.

*Interestingly, the politically correct are among the groups most likely to commit this error—despite being among those who complain the loudest of it in others…

Secondly, various apartments have been searched and computers confiscated based on suspicion of “hate postings”. Unfortunately, I have not been able to find examples or quotes of these alleged hate postings, implying that I cannot judge whether these specific instances could have been considered illegal* (as might be the case with “kill all X”), offensive-to-a-reasonable-reader-but-legal, or just everything-not-pc-is-hate-speech**. Irrespective of this, the situation is troubling on several counts, including that confiscating computers is an extreme and unproductive measure*** and that going to such lengths based on, as it appears, mere suspicion of guilt jeopardizes the Rechtsstaat. (And is a dubious prioritization of police resources…)

*Note that the German law is unusually strict, especially when anything even hints at support of the old Nazi-regime or its ideas. (This sometimes to the point that the ethical justifiability of the laws seems dubious, including absurdities like computer games being censored for using swastikas in depictions of Nazi enemies…)

**During the years that I actually bothered debating on blogs, I saw a great many examples of this. Other examples regularly reach me through the current news, as with [1]. The situation is so bad that I am not willing to attribute this to sheer incompetence or the inability to see the flawed perspective and the hypocrisy, nor to forgive this by applying Hanlon’s Razor—no, problems on this scale can hardly occur without malice and intellectual dishonesty, by a deliberate use of unfair accusations as a means to an end.

***I note e.g. disproportionately negative effects on the victims of the confiscation; the uselessness of any found evidence through the ease with which digital evidence can be planted; and the uselessness of a search on the computer of a “big fish”, who will have the means to protect himself through use of encryption and similar technologies. See also e.g. [2].

The “chilling effect” of such actions is also disturbing: How do we know that what we say will not be deemed hate speech or illegal speech by someone in a position to cause trouble? What if the police overreacts as mindlessly as in [3]? What if our own words are judged by such absurd criteria as in [1]? How do we know that factual statements, reasonable opinions, and attempts at serious debate will not cause the police to knock on our own doors? The simple truth is that we can only hope, and if this trend continues, the borders of even de facto illegal “hate speech” will continually be pushed into more and more unreasonable territory*.

*Based on the comparatively small size of the police action, there is a fair chance that it was directed at outrageous cases—this time around. If no protests follow, this is likely to change… Obviously, what is called “hate speech” (or “racism”, “sexism”, whatnot) in PC circles is very often far from being so, even now.

More generally, I would seriously question whether even the vilest* expression of opinion (per se; without e.g. a call for action) should ever be treated thus. It would be better to restrict such measures to expressions that also imply an action or a call for action (e.g. “Go kill an X today!”**, but not “All X deserve to die!”***).

*When it comes to anything but the vilest expression, measures like police intervention are unacceptable, anti-democratic, and a violation of the Rechtsstaat. Consider e.g. the relative triviality of the case discussed in [1] and the disproportionate reaction (admittedly by non-police).

**Again, this type of statement is sometimes heard from extremists within the Leftist or PC spheres. Cf. e.g. my discussion of the Charlottesville events.

***Statements that are not uncommon among Leftist and PC extremists.

As an aside, I found the claim disturbing that hate speech would come predominantly from the “extreme Right”*: Not only have I so far seen far more hate from Leftist and PC extremists (especially feminists) than from the “extreme Right”, which makes me doubt the neutrality of this action and suspect a double standard**, but I also suspect the common tendency to consider anyone with e.g. nationalist, anti-immigration, or whatnot opinions to be “extreme Right”, even when other opinions would point to the Left, thereby skewing estimates of the sizes of the (non-extreme) Left and “Right” among the broad masses.

*Starting with the renewed observation that this is a misnomer, unlike “extreme Left”: The extreme Left consists of people with extreme versions of Leftist opinions or who are willing to use extreme methods to reach Leftist goals; the “extreme Right” does not have the same role relative the “Right” in general. (To which must be added that the “Right” is far more heterogeneous than the Left, and that while the label “Left” can make sense, the label “Right” hardly ever does, except as an opposition to “Left”.)

**I note both that a double standard concerning opinions and behaviors is extremely common among e.g. PC and Leftist groups, with the most intolerant people often being the ones who complain the most of intolerance in others, the most sexist those who complain the most of sexism in others, etc.; and that there is a considerable skew in German law between the extreme Left and the “extreme Right”. For instance, a few years ago I read a newspaper article on crimes committed by these groups. The main claim was that crimes were more common on the “extreme Right”; however, it was clear from the presented statistics that this was only true due to a legal asymmetry, e.g. in that German law forbids carrying swastikas but is silent on the hammer-and-sickle. When looking only at non-asymmetrical crimes (e.g. assault, break-ins, …), the numbers were approximately the same.

Written by michaeleriksson

June 16, 2018 at 7:42 am