Michael Eriksson's Blog

A Swede in Germany

Archive for April 2018

Follow-up: Linux vs. GNU/Linux

with 6 comments

In light of a lengthy reply by user codeinfig to an earlier post on the issue of “Linux” vs. “GNU/Linux”, I revisit this topic.

This in two parts: An extension of the original discussion (partially driven by the reply, but mostly held abstract) and a more specific rebuttal of said reply (formulated in terms of a direct answer).

General discussion:

  1. At the time of my original post, I actually was not aware of the amount of controversy that surrounded this issue, mostly seeing Stallman’s position as an example of flawed thinking, and my intention was (like in much of my previous writings) to point to such flaws. (Possibly because commercial Unixes dominated my experience of Unix-like OSes until the early 2000s.)

    With hindsight, it was highly naive of me not to expect the topic to be “hotter”: This is the Internet, and more or less any question that could cause controversy will be discussed/argued/flame-warred at length—even for something as trivial-seeming as a name. To boot, this issue appears to be almost as old as Linux, giving it plenty of time to have been discussed.

  2. I stress again that I do not claim that “Linux” is an appropriate name when we do not speak of the kernel (cf. previous statements). However, “GNU/Linux” does not solve the involved problem. On the contrary, it compounds it further, because arguments against using “Linux” are the stronger against “GNU/Linux”. (However, this might very well have been different in the 1990s.)
  3. “GNU”, on its own, has at least three different meanings: The OS envisioned by Stallman, the GNU project, and the GNU set of tools/programs. Of these, I would consider the first the least relevant today, because this OS has simply not materialized in its full intended form, even after several decades, and I honestly cannot recall when I last heard that meaning used prior to this discussion. Even as early as 1994, when I started college and made my first contacts with Unix (specifically SunOS), the tools were arguably more important than the OS: The default setup after login consisted of one instance of Bash (running in an Xterm) and one instance of GNU Emacs; the main computer work for the first semester consisted of writing and executing LISP programs using Emacs—even under a commercial Unix version, with its own set of pre-existing editors, shells, whatnots, GNU tools had been preferred. An alternate editor and semi-imitation of Emacs, MG, was typically referred to as “Micro-GNU”*, showing the relative importance of Emacs within the GNU sphere at the time.

    *The actual name was “Micro-GNU-Emacs”, with the intended focus on “Emacs”, with “GNU” only serving to avoid confusion with other (less popular) variations of Emacs. (A distinction that hardly anyone bothers with today, “Emacs” being used quasi-synonymously with “GNU Emacs”, just like “Windows” usually contains an unspoken “Microsoft”.) However, so dominant was Emacs in the perception of GNU that most people shortened the wrong component of the name…

    But, by all means, let us go with the OS-meaning*: We say “GNU” and mean an OS. Even now, however, the use by codeinfig does not make sense. He appears to use “GNU” (resp. “gnu”) as an all-encompassing term for the OS in a very wide sense** or even the whole system, effectively saying that not only is e.g. a Debian system “GNU/Linux”—it is actually “GNU”… This borders on the absurd, because even when we speak of “GNU” as an OS, the possible interpretations are 1) the whole original vision by Stallman, i.e. “GNU/HURD” and 2) just the “GNU” part of “GNU/HURD” (resp. “GNU/kernel-of-your-choice”). If we take the first, Debian is only even a GNU candidate when the HURD kernel is used (which is not the case for a Linux version of Debian), and speaking of just “GNU” in the manner of codeinfig is clearly wrong; even speaking of “GNU/Linux” would be clearly wrong a priori. If we take the second, “GNU/Linux” would still be conceivable (before looking at other aspects of the issue), but the equation GNU = GNU/Linux would be obviously incorrect.

    *For simplicity of discussion I will try to stick to this meaning in the rest of the text, where the difference between the three matters and where the right choice is not clear from context. Note that earlier references do not necessarily do so.

    **An annoying problem in this situation is that it is very hard to define the border between OS and application in such a manner that everyone is happy and all circumstances are covered. A more fine-grained terminology would be beneficial; dividing the stack into just “OS” and “application” is as simplistic as dividing the year into just winter and summer. (However, this is secondary to the current discussion.)

  4. If we do use the OS meaning, then, yes, I would consider GNU mostly irrelevant today. It is of historical importance and it might very well grow important again, but today it is dwarfed by Linux, various BSD variants (arguably including MacOS), and possibly even the likes of OpenSolaris and its derivatives. And, no, this OS is not what e.g. I have running right now.

    On the other hand, the GNU tools/programs and the GNU project are highly relevant and immensely valuable to the world of non-commercial/-proprietary computing.

  5. GNU/Linux systems are certainly conceivable: Take GNU (in the sense of an OS without the kernel) and add Linux as a kernel. Such systems might even be present today. A typical Linux-kerneled* distribution, however, is simply not an example of this.

    *See what I did there!

  6. Some seem to think that “because system A uses GNU components it is GNU” or “[…] it should use GNU in its name”. This line of reasoning does not hold up: It is simply not practical to mention every aspect of a system (be it in IT, Formula One racing, or house building), and GNU does not today play so large a part that it warrants special treatment over all other aspects, including e.g. the X server and associated tools or the desktop. Again, this might have been different in the 1990s, but today is today. Cf. my first post.

    Notably, any even semi-typical Linux-kerneled system of today runs a great variety of software from a number of sources, and limitations in naming like “Linux”, “GNU/Linux”, whatnot, simply make little sense. Let a user name his five or ten most used “regular” applications and his desktop or window manager (depending on what is central to him), and we know something about his system, his needs, and his user experience. For most users, the rest is just an invisible implementation detail. Hell, many use even the command line only as a last resort… (To their own major loss.)

    That GNU possibly was the first major attempt at a free or open-source OS is not relevant either. Consider by analogy Project Gutenberg: Its founder claims* to be the first to think of the concept of eBooks: Should any party dealing with eBooks be forced to include “Gutenberg” in its name, resulting e.g. in the “Gutenberg/Tinder” reader? Or should ordinary book publishers be forced to refer to the original Gutenberg, for using printing presses? No—both notions are absurd. They might deserve to be honored for early accomplishments and, certainly, someone might choose to voluntarily name something in honor (as Project Gutenberg did with the original Gutenberg)—but no obligation can conceivably be present.

    *I very much doubt that this is true. Yes, his idea goes back to, IIRC, the early 1970s or late 1960s, but even back then it cannot have been something entirely unthought of, be it as a vision for the (real) future or as sci-fi. Vannevar Bush published ideas several decades earlier that at least go somewhat in the same direction.

  7. Some arguments appear to go back to a variation of moral superiority, as with Stallman’s arguments (also linked to in my original post) or codeinfig’s below. Notably: Linux is not free (in the sense of free software etc.) enough/does not prioritize freeness enough; ergo, GNU is morally superior and should be given precedence. This too is a complete non sequitur that would lead to absurd consequences, especially because the different parties have different priorities for deliberate reasons.

    Someone who does share GNU’s priorities might, for all I care, choose to voluntarily include GNU as a part of the name of this-or-that. However, no obligations can exist and those who do not share said priorities have absolutely no reason to follow suit. More: It would be hare-brained if they did…

As for the more specific reply*, I start by noting that there are clear signs that you** have not understood (or misrepresent) what I am actually saying, and that it is hard to find a consistent line of reasoning in your text (your language does not help either). If you want another iteration of the discussion, I ask you to pay more attention to these areas.

*I have left some minor parts of the original reply out. There can be some changes in typography and formatting, for technical reasons. I have tried to keep the text otherwise identical, but it is conceivable that I have slipped somewhere during editing or spell-checking—always a danger when quoting extensively. The full original text is present under the link above, barring any later changes by codeinfig.

**I address codeinfig directly from here on.

“whether the emphasis is on GNU alone or GNU and HURD in combination matters little for the overall argument.”

it covers more of the argument than you realise, and it is the flaw in your argument.

you are making gnu out to be a tiny subset of what it is, and making it less significant (details aside, you are greatly diminishing what it is) so that it pales next to “linux.”

this is unfair for several reasons— first, you do not understand what gnu is. you think gnu is just some software that isnt useful to the (everyday) user. its a misrepresentation that would lend at least some weight to your argument, if it werent a misrepresentation.

The “GNU” vs “GNU/HURD” distinction only makes sense if we abuse “GNU” in the manner I have dealt with above. The rest is largely a distortion of what I say. In particular, I have never claimed that GNU would pale to Linux (in the kernel sense)—I claim that it pales in comparison to the overall systems. (Which really should be indisputable.) If you re-read my original post, you will find that I clearly point to uses of GNU tools that are not obvious to the end user; however, the simple truth remains that for someone who does not live on the command line or in Emacs, the overall importance of GNU is not so large that it deserves special treatment over some other parties.

You do not seem to understand how many different components of various types and from different sources go into building e.g. a Debian system (be it as a whole or as the OS), with many of them present or not present depending on the exact setup. We simply do not have anything even remotely close to an almost-just-GNU system with Linux dropped in in lieu of HURD, which seems to be your premise.

gnu was “the whole thing” before linux was a kernel. the web browser is not “linux” either, it is a browser. but we call it “linux.” xwindows predates “linux” by nearly a decade, but we call it “linux.”

when we call these things “gnu” you fail to understand that *that is what it was called already* and “linux” is no more a web browser than gnu was at the time, but somehow its ok for linux to presume itself to be all those things, but its “riding coattails” if gnu helps itself by being included.

Here we have several misrepresentations, e.g. the claim that the browser would be called “Linux”—this simply is not the case. Neither were those things already called “gnu” in the past. Notably, in the time before Linux-kerneled systems broke through, the clear majority of users were running commercial Unixes, e.g. SunOS, with GNU either absent or represented through a few highly specific tools, e.g. Emacs. While it is true that GNU was conceived as “the whole thing” (by the standards at its conception), this does not imply that it actually is “the whole thing” when it is included in a greater context and at a much later date. By analogy, if someone launches his own car company A, and another car company B, thirty years later, uses parts delivered from A, other parts from other companies, and parts that it has produced on its own, should B’s products then be referred to as “A” or as “B”? Obviously: “B”. In addition, due to the absence of HURD there is no point in time prior to Linux where GNU actually, even temporarily, was “the whole thing”, making the claims of precedence the weaker.

Note the item on historical influence above.

Note that my original formulation concerning coat-tails, a) referred explicitly to “better-known-among-the-broad-masses”, which is indisputably true and makes no implication concerning e.g. practical importance, b) was used to demarcate the outer end of the spectrum of interpretations of the situation—I never say that Stallman’s intent is to ride the coat-tails of Linux, only that this is the worst case interpretation.

the whole idea that linux is entitled to do this but gnu is not is special pleading *all over the place.*

its special pleading that strawmans the heck out of what gnu is in the first place— with a generous “side” of ad hom for why stallman thinks we should call it that. oh, its his quasireligious views…

No such special pleading takes place. I clearly say that “Linux” is a misnomer—but that “GNU/Linux” is a worse misnomer through compounding the error.

Watch your own strawmanning!

no, his arguments are not quasireligious. they are philosophical and even practical. thats an ad hom attack, and the only thing religion has to do with it is in parody (and other related ad hom from critics.)

If you actually read what he says, you will find that he is quite often religious/ideological and lacking in pragmatism: He has an idea, this idea is the divine truth, and thou shalt have no other truth. Watch his writing on free this-and-that in particular.

i suspect that at some point (to be fair, you havent yet) you will accuse me of being some kind of stallman devotee. i was an open source advocate first, but i switched to free software after years of comparing the arguments between them. open source is a corporate cult, partly denounced by one of its own founders.

i switched to free software because it lacks the same penchant for rewriting history, for splitting off and then accusing those who didnt follow of “not being a team player,” and basically is more intellectually honest than open source and “linux.”

but its like an open source rite of passage to nitpick about “gnu/linux,” and it tends to follow a formula. you left out the part about how “free” is a confusing word with multiple meanings— sort of like “apple computer.”

I have not yet, and I will not here either—and it would not matter if I did: Your arguments remain the same irrespective of whether you are a devotee or not.

As for your motivations to prefer free over open: They have no relevance to the naming issue.

“To the best of my knowledge, no-one, Stallman included, has suggested that we refer to GNU (!) as GNU/Linux.”

in most instances he does. your definition of gnu is in fact, partial and subset, so he has never suggested we refer to that subset by anything. i dont believe he has ever referenced the subset you call “gnu” at all.

See the general discussion above and why this does not make sense. (But I do not rule out that Stallman too can have said something that does not make sense.)

” `The question is rather whether a Linux (sensu lato) system should be referred to as GNU/Linux.’ “

no, thats a loaded question, and a fine ingredient for a circular argument. “its already called linux.” well, it was already called gnu. but again, *somehow* linux is entitled to do that and gnu isnt, even though gnu was already calling it that.

for stallman and many others who have not been swayed by over a decade of these “dont bother calling it gnu” articles, the question is whether gnu should be referred to as only “linux.”

It was never called GNU and even if it were, you cannot demand that others, building new products where GNU is a subset, propagate the name in perpetuity. Cf. above.

the answer to that question is cultural, and already explained— *if you care about software freedom* then gnu is a signifier. to a programmer this makes sense— its self-documenting.

from a marketing perspective, this is ridiculous. to a “linux” fan (to torvalds himself) this is riduculous. to me, its a *lot* more honest. however, what stallman has done is establish a brand that shows something living up to a promise.

“gnu” is quality control (a brand) for user rights. and linux really isnt. it really really isnt, but why it isnt is a separate debate. im not trying to write you an oreilly book here.

so again— if you want to signify user freedom, call it gnu/linux. (if it were up to me, the /linux would be dropped, stallman was trying to be fair.) if you want to signify whatever the heck “linux” stands for, call it whatever you want. i call fig os “fig os,” but fig os is a gnu/linux distro.

This is the flawed moral argument discussed above. In addition, why should e.g. Torvalds include “GNU” to stress free software when free software is not his priority? If he does not include it, how can I (you, Stallman, …) presume to alter the name based on having another priority?

As an aside, if the reasoning went in the other direction, i.e. “You are not free enough to use our name, so stop using it”, this would make a lot more sense. (Assuming that someone sufficiently non-free did use “GNU”.)

“Not at all: A world-view in which GNU is so important that it would warrant top-billing in the context of a Linux system is outdated—not GNU it self.”

thank you for clarifying, but you have not explained why gnu is not important enough to warrant top-billing, except to say that applications are more important to users who dont know why gnu is important.

Again: There are many components from different sources that make up a system. GNU is just one of them. (If you feel that GNU truly outweighs all others to such a degree that it warrants top-billing, it is up to you to prove this.) Further: What is important to the user is what matters in the end. If, by analogy, Bash or Emacs behaves the same, but is now implemented in Java or C#, it remains Bash resp. Emacs. The developers might be in understandable tears, but the world goes on. Implementations are fungible to the users—the result of the implementation is not. Hell, when I use Vim, Bash, and Tmux* under Cygwin, I have almost the same experience as when working under Debian, even when actually on a Windows machine… Even speaking of “Windows user” and “Linux [or whatnot] user” makes a lot less sense today than it did in the past, and it is often more sensible to speak of e.g. “Bash user” and “Vim user”.

*Note that of the three, only one is a GNU program.

your article is mostly assertions, and you do start to explain some of them though i still think it rests mostly on ad hom and assertion. its a very common set of assertions too— made year after year after year, i even made them myself once long ago.

Ad hominem and assertions pretty much match what I see in your writing…

“I am saying that building e.g. a Debian system without GNU is conceivable.”

thats hardly fair. gnu has been vital to all this for a quarter of a century (bsd can make a similar argument, considering that the only reason gnu was necessary was they were tied up in gaining the rights to their own work.)

I doubt that GNU has been that important for a whole quarter of a century, but even if it has been, that is irrelevant: It is not that important now. This is not a matter of fairness. Light-bulbs were great; today, they have been replaced by LEDs and other newer technologies. (Be it for technical reasons or through legislation.) I do not look up at my ceiling lamp and say a quick prayer to Edison (or one of the other inventors involved in the development of light-bulbs).

you make gnu less essential by creating a strawman version of what it is, so that you can say “but this isnt a good enough reason to warrant top-billing.”

we cant agree on the validity of your argument if you insist on misrepresenting what you weigh the importance of.

in fact, your argument suggests to me that the name is more important than the thing itself. i mean— gnu wouldnt be essential if this mostly-hypothetical thing like gnu were created instead! but thats no reason to call my entire operating system:

gnu! linux!

stallman has said lots of times that the name really isnt important. seems to really go against this whole thing, eh? people miss his asterisk where he says its important that everything gnu exists to accomplish is not forgotten for a side movement that reframes years of work to deliver freedom to the user as “just a practical way to develop software.”

Most of the above is more of you misrepresenting (or misunderstanding) what I am actually saying. As for the name vs. the thing: The topic of my post is the name and flawed reasoning around the name; ergo, I deal with the name and the flawed reasoning.

open source creates the need for this, stallman says “well if youre going to misrepresent everything we do, at least give us three letters of credit for this enormous amount of software youre relying on.” and people say more or less: “wow, the nerve of THIS GUY!”

its so funny because all he wants is for people to not forget that the entire point of all this was free software.

His entire point was free software. Torvalds’ (and many others’) is not. And again: If GNU was given credit in the name, then there are other parties with a similar right.

your argument is whether it should be called gnu/linux or not. but it never addresses the years-old argument of why it should be called gnu— it makes up its own reason, and then steps on it.

it really is a giant strawman. and i appreciate that you are almost certainly sincere and wouldnt create a strawman just to be a jerk. but its still a strawman.

Your claims make no sense, unless you truly are under the misapprehension that a typical system consists of the kernel, various GNU components, and a few other trivial bits. This is very, very far from the truth.

“GNU GPL, however, is irrelevant for the functioning, it would be easy to replicate something similar,”

haha— it isnt irrelevant at all, ask torvalds if it is. it isnt easy to replicate something similar either, and its less easy to get people to use such a thing. pulling off copyleft (when a billion dollar corporation was heavily dedicated to defeating it) was a serious coup. youre making it out to be a bunch of words in a file.

go make a gnu gpl— go ahead. show me. have anybody you can find to help in on it, too. have someone show me how easy it is.

“there are other available licenses. Notably, such other licenses, e.g. something in the Apache or BSD families, are often preferred by people outside GNU, because these parties have other priorities than free software.”

and thats the thing. they dont do what the gpl does, they dont achieve what the gpl does, but you consider them replacements. its apples and oranges.

Firstly, you assume (again) that copy-left, free this-and-that, whatnot is the priority of everyone. It might be your and Stallman’s priority, but it simply is not a global priority—and it is not needed to build e.g. an open-source computer system. Linux, Debian, …, does not need the GPL to exist.

Now, if someone does want a copy-left license? Firstly, there are other copy-left licenses around, if possibly with a somewhat different coverage. Secondly, combining another existing license with aspects of the GPL and/or with a few days of research by a lawyer should yield something quite passable. (True, there might be a few issues to sort out over time, e.g. due to ambiguities or complications with different jurisdictions, but not something that would require several decades to build.)

“GCC is mostly important for building the system, not for using it.”

special pleading, all over the place.

Not in the least: How would you justify including the compiler used to build the system as a component of the system? (Except for those proportionally rare cases where it is actually used to compile other programs during later use of the system.) See also below.

“In fact, even now, many build setups contain explicit checks for the presence of GCC and automatically fallback to CC, should GCC be absent.”

and this is a bit of trivia, because all this stuff we have now would not exist (and would not be maintained) if everyone had to use cc. its like you know the significance of gcc but choose to ignore it when its convenient to your argument.

open source would not exist without gcc. linux might, but not the linux we have today. some little usenet gem that wasnt developed by half as many people— because they needed gcc to do it.

That is a very far-reaching claim. Can you back it up? I doubt it. In the case of open source in general, it is definitely incorrect, as can be seen by the many projects that use e.g. Java instead of C… Also keep in mind that in the absence of GCC, someone is likely to have started to improve CC or to build a more suitable compiler as need arose. Consider e.g. how GIT came into being; note that GCC as it is today is a very different beast from what it was when Torvalds started his work; note that GCC contains much that is not needed for Linux in the first place (e.g. unused languages) or is not essential for the existence of Linux (e.g. compilation for architectures outside of the mainstream PC processors).
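As an aside, the kind of GCC-presence check with CC fallback that I mentioned in the quoted passage can be sketched roughly as follows. (A hypothetical shell fragment of my own, not taken from any actual build system; real build setups, e.g. configure scripts generated by Autoconf, probe far more thoroughly.)

```shell
# Hypothetical sketch: prefer gcc when it is available on the PATH,
# otherwise fall back to the system cc.
if command -v gcc >/dev/null 2>&1; then
    CC=gcc
else
    CC=cc
fi
echo "using compiler: $CC"
```

A build script would then simply invoke "$CC" for all subsequent compilation steps, regardless of which compiler the check selected.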

why even mention cc when you could have talked about clang instead? because i already addressed that when i talked about bsd, and because the part about cc is hypothetical (and at best, unlikely.)

One of several claims that make poor sense even on the sentence level. Besides: When did you address CC? Why would what I say about CC be unlikely?

As for Clang, I was not aware of it until now (but did consider mentioning LLVM in addition to CC, or the possibility of having started the original development with even a non-Unix compiler). However, its existence proves my point: Even if CC were not a realistic replacement for GCC today, another tool definitely is. And: Other tools capable of filling the role of GCC have been possible at any point.

“glibc is possibly the most deeply ingrained dependency (and a better example than my original GRUB); however, this is still just one library.”

and linux is just one kernel. so what? its a monolithic kernel, and glibc is a monolithic library. theyre both enormous. you cant make them smaller by counting units, thats absurd.

glibc is far smaller, and the kernel, by its very character, is the core of an OS. (glibc is not even the largest individual library—quite far from it, actually.) What you mean by “counting units” is not clear to me. The relevance of whether they are monolithic or not is lost on me.

“Here too we have the situation that glibc is not used because it is the only alternative, just the best.”

so once again, we shouldnt call it “gnu/linux” because gnu is just a bunch of vital components that arent vital because you could easily replace them with a bunch of drastically inferior alternatives that no one actually wants.

hmpf. yes, im taking some liberties with my version of your argument, but only to try to get its author to appreciate how much of a stretch it is.

“As with GCC, its absense would simply have led to something else being used.”

so *hypothetically*, gnu doesnt deserve top billing. because it could be less important than it is, if it werent.

You miss my point: That if we look at the situation as it is and say “part X is important today; ergo, if part X had never existed, the whole would not exist”, we ignore both the possibility of a replacement that would still have made the whole viable and the considerable likelihood that something else would have evolved over time to fill the same role or that the role would have been covered in a different manner. If we look at the situation today and see that just removing e.g. glibc would cause a given system to fail catastrophically, we cannot conclude that the system would not have existed had glibc not been present in (hypothetically) 1990—and therefore we cannot conclude that the existence of the system is contingent on glibc and, by implication, GNU. As a consequence, when you say “i said without gnu. no gnu gpl, no gcc, no glibc. you go right ahead, since gnu is irrelevant now. remove it, and find out what you get”, the answer is “without GPL, GCC, and glibc, we would see something that is recognizably approximately what we have today”. We might have ended up with a king penguin instead of an emperor penguin, but we are still talking penguins. Now, a scenario that removes GNU entirely from the early Linux development, that could have been a very major problem—but that does not imply that Linux and/or Linux-kerneled systems cannot exist without GNU today or that e.g. glibc is so central that they would never have come into existence without it.

“Even now, keeping the interface intact and replacing the implementation with a non-GNU variation would be technically feasible.”

but then, why should we rewrite glibc just to deny gnu the billing it allegedly doesnt deserve now?

That is not what I suggest: The point is that if glibc was no longer an option, hypothetically because a GPL violation necessitates its removal, a work-around is available. Yes, this might be tantamount to a team of surgeons operating around the clock to put in an artificial heart that buys the patient time until a real heart transplant is possible; no, it does not equal a dead patient.

“arguments that speak against referring to a system by the name of its kernel also speak against using the names of individual libraries, build-tools, and whatnots.”

except that you are oversimplifying the “linux is just a kernel” argument, failing to understand what people actually want with the name “gnu/linux,” not aware of why they want it, and making the name out to be more important than what the name refers to.

Not at all: The only way I can see to make your statement make sense is to posit that “GNU/Linux” would actually be enough to cover the entire OS (at a minimum) or the OS + a considerable portion of the rest of the system. This, however, is not even close to being the case. It is conceivable that a working “GNU/Linux” (only) system is buildable today, but it would not be the equivalent of e.g. Debian, Fedora, Suse, …

As to name vs. thing, cf. above.

“I grant that Linux would conceivably not exist today without the presence of GNU in the past”

nor the present.

Prove your assertion. Remove GNU today, where do you see the insurmountable obstacle? (As opposed to the far more likely transitional period of blood, sweat, and tears.)

“if we speak of the system as a whole (the sensu lato), I refer to my post for a discussion why GNU is no longer important enough to define the system.”

now it is a discussion. it was an assertion, which leaned a bit on misunderstanding and special pleading and ad hom.

I strongly disagree.

“I do. Cf. above and your apparent confusion of what GNU is.”

i am not “confused” about what gnu is. gnu was from the beginning, a fancy-pants latin phrase (“the whole thing.”)

since the 1990s, a bunch of people have suggested that it is just a bunch of applications that everyday people dont really use.

your argument is built around the suggestion being a fact.

See the general discussion for various meanings and your incorrect interpretation.

given that the conclusion of your argument is that we should agree with them, i would call your entire argument circular.

Your claim makes no sense, shows that you have not understood me, and raises doubts as to whether you understand what a circular argument is.

we dont have to rewrite history. however, i would say you argue (quite unintentionally, beacuse i think you really do misunderstand the nature and premise of your own argument) that history doesnt deserve to not be rewritten.

it is not necessary to rewrite history to refer to gnu and “linux” instead of gnu/linux.

Again, you make no sense.

it is necessary to rewrite glibc, the gpl, and reestablish so much of gnu that you generously refer to as “linux,” in order to make most of the PREMISES of your argument into facts.

If you believe that, you definitely have not understood what I am actually saying.

if the premises are false, the argument isnt sound. in your reply, you spend a lot of time defending the logic of your argument based on a more hypothetical premise.

the premise of your argument was just false. the logic is heavily just assertion.

You have not shown that my premise is false; yet you seem to rely on faulty premises or faulty understandings yourself.

there isnt any need for ad hom, its simply wrong. but thats not important.

Where have I used ad hominem? Do you understand what this actually implies?

what matters is that in ten years, people will still be trying to get “gnu” removed from “gnu/linux.” and we can have this debate all the way there. i do hope we get breaks for the restroom though.

I do not see that as something that really matters, and the opinion that “GNU” does not belong in the name is likely to grow stronger for reasons that include a further lessening of GNU’s practical relevance, a smaller proportion of people who know of GNU at all, and a growing importance of both the distribution aspect and the desktop aspect.

“I do consider free software highly beneficial, but free software is not a core priority of Linux”

and that is exactly why stallman says it shouldnt be called just linux. because free software is not a core priority of it.

thats his entire argument. if you care about free software, call it gnu.

it has nothing to do with percentages of code, it has nothing to do with riding coattails.

it has everything to do with why gnu was created in the first place. not to write glibc, not to give you a web browser.

gnu was created to give the user freedom. and if you care about that, calling it “linux” ignores the original purpose, paints something relevant in modern times into enough obscurity that people think its just about user applications— and lets “linux” come along and assert boldly that freedom doesnt matter.

its not about ego, religion, or percentage of code. its about whether you care about freedom or not.

See the general discussion for why this is a faulty argument when it comes to the name.

funny thing, its always implied that stallman is just nitpicking, but year after year (after year after year) open source nitpicks that “gnu” isnt important enough to be in the name.

I, personally, have implied no such thing. That “GNU” does not belong in the name is not nit-picking.

free software never convinces everyone to add “gnu” and open source never convinces everyone to drop it, but both sides continue to nitpick this for decades.

The division into free and open software should not play a role when discussing the name issue. If it does, something is fundamentally wrong with the approach.

your argument is most likely honest, if lacking context and history. the argument itself has its own history, though open source doesnt learn from the failure of the argument youre making, it just keeps reasserting it.

the history of your argument is that it is constantly made— 20+ years running now.

i made it myself, over a decade ago— i abandoned it because it was silly.

From what I have seen so far, the lack of historical and contextual understanding seems to be more on your side, with the one reservation that I was actually not aware of the extensive history of the argument. The reasoning you apply today, e.g. that what I refer to as moral superiority above should affect the name, is the silly part.


Written by michaeleriksson

April 25, 2018 at 6:25 am

Consumer rights and force majeure

leave a comment »

A major consumer problem in Germany (and likely large parts of the rest of the world) is “force majeure”-style* restrictions on performance and liability**.

*Some of these are related to actual “force majeure”, others merely follow a similar pattern of “what happened does not match my best case scenario; let the customer suck it up, then it is not my problem anymore”.

**It is often the case that the performance of a duty is legitimately hindered by an external event, but that does not automatically imply that the hindered party is legitimately free from liability, the duty to compensate others, whatnot.

In some cases, these restrictions can be seen as justifiable or necessary, e.g. in that an insurance company would not cover city-destroying bomb damage as seen in WWII: Chances are that an attempt to do so would simultaneously bankrupt the company and leave the claimants with only a fraction of the compensation they need. These will mostly be actual “force majeure” cases, but “force majeure” does not automatically make it such a case.

In many others, these restrictions are arbitrary, unfair, unexpected*, or otherwise customer hostile. Some cases can even be seen as borderline fraudulent, because the customer is misled about what he can expect for his buck or what the true cost of a certain service is.

*In the sense that most customers and independent observers would have had a different expectation.

To look at a few examples:

  1. The events that can lead to a certain restriction are often under the close control of the business and entirely out of the control of the consumer.

    For instance, a few years ago I hired a storage unit—and was met with clauses that basically restricted the liability for any damage to my goods to nothing. This included water damage and theft.

    Now, who decides where water pipes are drawn and how they are maintained? Who decides what locks are used? What alarm system? Whether a guard will be present? To entirely avoid the risk of water damage and theft is impossible, but these risks can be reduced quite considerably—or made comparatively large. Notably, not making the business liable ensures that it has little incentive to actually invest in the security of the facilities, thereby increasing the risk that e.g. a break-in will occur…

    As an aside, the latter point includes a disturbance in the functioning of market forces: When the business is liable, it will (to the best of its ability) try to find a point where the overall expected cost from preventative measures, insurance, break-ins, … is minimized, leaving the overall economy a little bit better off. This especially with bulk insurance potentially being cheaper and definitely less effort than individual insurance (as discussed in the next item). When it is not, there is no-one in a position to balance these factors, the expected cost rises, and the overall economy is worse off.

  2. Scenarios like these hide and/or increase the true cost from/for the customer: He can either hope for the best, with an unknown risk/expected cost, or he can insure himself independently, which increases the cost not only through the amount to pay, but also through the extra effort to investigate alternatives* and going through the paperwork—assuming that there even is a reasonable insurance available… Far better would be for the business to be liable, with the business optionally taking out a corresponding insurance at a bulk rate, with a corresponding non-hidden increase of the rental fee, allowing customers to see what they actually pay.**

    *Particularly perfidious businesses are likely prepared to swoop in with an insurance-on-top-of-the-rent, where many customers will be willing to pay an over-the-market fee for the comfort of not having to do research and whatnot—for something that should have been included in the rental fee to begin with…

    **On the detail level, there are additional issues to resolve, e.g. where to cap the possible compensation and to what degree the customers must disclose the contents of their units in order to be covered. This, however, is unimportant for a big-picture discussion. (The same can apply to other points under discussion.)

  3. The events are often given a pseudo-“force majeure” aura or painted as unexpected when they are not (overlapping with the first item).

    For instance, in Germany it is common that just two centimeters of snow throws the railway system into chaos, while the railway companies virtually every single year are “surprised” by snow at some point in December. Sorry, at these latitudes, no-one has the right to act surprised when it snows during the winter, and corresponding measures to survive two centimeters of snow must be taken. (That this is not an impossibility is proved by Sweden, where problems of this size occur much later.) Sadly, the attitude seems to be that “because it snows on so few days of the year, we ignore the possibility of snow and let the passengers take the hit when it does snow—after all, we do not have to compensate them*”; something also seen in e.g. the lagging maintenance of the infrastructure, which causes many unnecessary problems and delays. (Here again, we also have a hidden cost issue: Would you rather pay 25 Euro and get where you want on time, or 20 Euro and be thirty minutes late on every second trip?)

    *A 60 (!) minute delay entitles the customer to a 25 (!) % refund of the ticket price. 120 (!) minutes gives 50 (!) %. For less than 60 minutes nothing is given; for more than 120 there is no further increase. Actually getting the refund is such a hassle that it is often not worth the effort. Few provisions are made for other types of compensation and none for e.g. “I missed my flight” or “I missed two hours of work” scenarios. (Cf. official information from Deutsche Bahn.)

    For instance, strikes and the like regularly lead to service interruptions with no compensation—even when occurring so fast that customers have no reasonable chance to adapt their plans*. However: Firstly, strikes are a part of doing business and something that must be considered accordingly. Secondly, whether it comes to a strike is largely within the control of the business**, but not the customer, and the business*** should carry the responsibility vs. the customer.

    *For instance, a few years back I was to fly (for the first time) from Düsseldorf Airport to Munich for an interview. To ensure that nothing went wrong, I went to the airport several days in advance, had a look around, found out where I needed to go, etc. On the day of the flight, I returned—and found that there were hundreds and hundreds of people queuing to reach the security checks, and eventually realized that I had no chance of catching my flight, despite being there several hours in advance of boarding. The reason: A strike by security personnel that had been announced just a day earlier… A very healthy regulation would require strikes to be announced sufficiently far in advance that customers and employers can react to reduce the damage to the customers as an innocent third party. (What time frame is involved will depend on the circumstances, but with air travel as much as two weeks might be reasonable; for the local library two days might be enough.)

    **No, an employer cannot unilaterally tell his staff not to strike; however, he can influence how negotiations go, he can judge the damage done by strikes vs. the damage done by agreeing to demands and act accordingly, he could possibly negotiate a way of striking that is less damaging to the customers, etc. As long as the customers carry the main consequences, however, he has lower incentives to avoid the strike.

    ***However, with the option to in turn make demands towards the strikers in at least some cases, notably when the strike was “wild”.

    As an aside, a major problem with strikes in Germany, from a union perspective, is that they often do more to turn the customers against the union than to convince the employers. Forcing businesses to compensate their customers for the effects of strikes could change this. (As could a more rational strike behavior, but that is another topic entirely.)

  4. Flood damage is usually excluded in a blanket manner from regular insurance policies, because the insurance companies are afraid of being stuck with thousands of large simultaneous claims in a river-rich country.

    However, this goes contrary to a layman’s expectation and likely leads to many not having the coverage they expect. Further, separate coverage is not always on offer, potentially forcing the customer to go to a second insurer, with the extra research and whatnot. I would also strongly suspect that the cost of separate coverage is higher than that of built-in coverage for the customer, because the overall fees and risks cannot neutralize each other and because a greater markup is added.

    A better solution would be to include flood damage, raise the fees for people in high-risk* areas (possibly with the option of a voluntary opt-out), and increase the re-insured amount. The raised fees ensure that the insurance company does not lose money; the re-insurance that it is not crippled by a major flooding.

    *Consistent with the idea that whoever controls the risk pays the additional cost; however, note that the control here is lower than for the storage rental above, e.g. because the likelihood and consequences of a flooding river are to some part controlled by (other) humans, not just nature, and because we cannot always choose where we live.

Notably, German businesses often have the attitude that they have no liability for anything, that it is always the customer’s (or someone else’s) problem, or similar—even in the absence of a corresponding contractual clause, real or pseudo-“force majeure”, whatnot. (In addition to my own experiences and what I have heard from others, I have read a great number of articles dealing with consumer issues in a German context. The situation is horrifying.) Examples include retailers systematically sending the customers directly to the manufacturer when a product is defective*, (often for-charge) hot-lines that are staffed with incompetents, various hoops that have to be leaped through in order to get compensation**, pretending to be unaware of legal rights or deliberately telling customers something that does not match the legal situation***, …

*According to German law, the retailer, not the manufacturer, is obliged to correct problems. The retailer might later have the right to turn to the manufacturer for help or own compensation, but that is between him and the manufacturer—not the customer and the manufacturer.

**A scenario that I have encountered several times, myself, is that I write an email, detail what is wrong, and receive a form back that neither allows nor requires any information not already provided—together with the refusal to treat my complaint unless I spend five or ten minutes (redundantly!) filling out the form, another twenty running by the post office, whatnot. Other common problems include repeated requests for information already provided, requests for mandatory irrelevant information, refusal to handle anything by email, … It is quite common that the effort to receive compensation (see a problem solved, replace a defective product, whatnot) is so large that it exceeds the intended benefit—honi soit qui mal y pense…

***For instance, as I made my last order with Amazon Germany, possibly some fifteen years ago, my request to cancel a purchase was originally denied with the claim that Amazon’s help pages (!) ruled this out—never mind that German law gave a fourteen day return right, no questions asked, for any online purchase…

As an aside, expanding on a few items above, I am generally strongly in favour of having whoever controls the risks/costs and gains the benefit from a certain behavior also carry the costs/risks/consequences/… Above, we have e.g. the principle that whoever controls the risk of a break-in has the responsibility to reimburse others for not preventing a break-in. Other examples include e.g. that when someone performs an action to his own advantage and (negative) externalities occur, he should compensate others for these externalities*.

*Consider e.g. a factory that earns well while polluting (compensating efforts, some type of environmental tax, or similar, is called for) or a land-lord who performs extremely loud renovations over months in order to later increase rents (tenants should be compensated for reduced quality of life).

Written by michaeleriksson

April 19, 2018 at 10:54 pm

“Linux” vs “GNU/Linux”

with 5 comments

A sometime claim—made especially by Richard Stallman, the founder and main force behind GNU—is that “Linux” is an inappropriate term (when not referring specifically to the kernel) and that “GNU/Linux” would be better.

However, this view is at best outdated*—at worst, it is an attempt to ride on the coat-tails of a better-known-among-the-broad-masses project. Most likely, however, it is a sign that Stallman is too fixated on his own vision of “GNU/HURD”**, and is unable to see that there are other perspectives on the world: Since his focus is on GNU, those who use Linux instead of HURD obviously appear to use “GNU/Linux” instead of “GNU/HURD”. This, however, has very little relevance for the typical Linux user:

*GNU used to be a much bigger deal than it is today, for reasons both of changing user demographics/behaviors/wants and of an increased set of alternative implementations and tools. Certainly, Linux (in any sense) would have had a much tougher time getting off the ground without GNU.

**HURD was conceived as the kernel-complement to GNU roughly three decades ago—and has yet to become a serious alternative to e.g. Linux.

The general criticism that Linux is just the kernel and that the user experience is dominated by user programs (and other non-kernel software, e.g. a desktop) is quite correct. (This can be seen wonderfully by comparing an ordinary Linux computer and an Android smart-phone: They have very little in common in terms of user experience, but both use a Linux kernel. Conversely, Debian has made releases that use a non-Linux kernel.) However, in today’s world, most Linux users simply do not use many GNU programs, they have correspondingly little effect on the user experience, and a functioning Linux system entirely without them* is conceivable.

*The main problem being “hidden” dependencies. For instance, most Linux computers use GRUB for booting and GRUB is a GNU tool. However, none of these hidden dependencies are beyond replacement.
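As a rough, system-dependent sketch of how such a hidden dependency can be made visible, one can list the dynamic libraries a binary links against: on a typical glibc-based distribution, even non-GNU programs show a `libc.so` (glibc) line. The exact command output varies per system; this is an illustration, not part of the original argument.

```shell
# Sketch: make a "hidden" GNU dependency visible by listing the dynamic
# libraries of an ordinary binary. On a glibc-based distribution the ldd
# output includes a libc.so line (glibc); on e.g. a musl-based system a
# differently named libc appears instead. Output varies per system.
deps=$(ldd /bin/ls 2>/dev/null || true)
case "$deps" in
  *libc.so*) echo "dynamic libc dependency found (glibc on most distributions)" ;;
  *)         echo "no libc.so line found (static binary, or differently named libc)" ;;
esac
```

The same check applied across a distribution’s binaries is what makes glibc a far more pervasive dependency than any individual GNU application.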

For instance, a typical Linux user might use Firefox or Chrome (both non-Gnu), LibreOffice (non-Gnu), a few media applications (typically non-Gnu), … Even most parts of the OS in an extended sense will typically not be GNU-programs, e.g. the X-server, the window manager, the log-in manager, the network manager, a desktop environment, … The best way to approximate the user experience would likely be to speak of e.g. “distribution/desktop”, e.g. “Debian/KDE”*, especially seeing that most desktop environments insist on providing their own, entirely redundant tools for tasks that more generic tools already do a lot better, including text editors, music players, image viewers, …

*KDE is a user hostile disaster that I strongly recommend against, but it is likely still the most well-known desktop environment. Generally, not everyone uses a desktop environment, but most do.

Even those, like yours truly, who actually do use a lot of GNU programs are not necessarily bound to GNU: Most important GNU tools are re-implementations of older tools, and there are alternate implementations available even in the open- and free-source worlds. Are the GNU variations of e.g. “ls”, “mv”, “awk” better than the others? Possibly. Would it kill someone to switch? No. Even a switch from Bash to Ksh or Zsh would be nowhere close to the end of the world. Admittedly, there might be some tools that are so significantly better in the GNU-version that users would be very troubled to switch (gcc?) or that are not drop-in replacements (e.g. gnumeric). These, however, typically are either developer tools or have a small user base for other reasons. Most modern users will not actively use a compiler—or will not need the extras of gcc for their trivial experiments. Most users will opt for a component of an office suite (e.g. LibreOffice) over gnumeric. Etc.
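To illustrate how loosely a user is bound to the GNU variants, one can check which implementations are actually installed: GNU tools conventionally identify themselves in their `--version` banner, while most BSD and BusyBox counterparts do not. The following is a minimal sketch; the helper name `is_gnu` is made up for this illustration.

```shell
# Minimal sketch: guess whether an installed tool is the GNU implementation
# by looking for "GNU" in its --version banner (e.g. "ls (GNU coreutils) 9.1"
# or "GNU Awk 5.1.0"). Many non-GNU implementations reject --version
# entirely, in which case the banner is empty and we report "no".
is_gnu() {
  banner=$("$1" --version 2>/dev/null | head -n 1)
  case "$banner" in
    *GNU*) echo "yes" ;;
    *)     echo "no"  ;;
  esac
}

is_gnu ls    # typically "yes" with GNU coreutils, "no" with e.g. BusyBox
is_gnu awk   # typically "yes" for gawk, "no" for mawk or BSD awk
```

Swapping in an alternative implementation then amounts to installing it and adjusting PATH or the relevant symlinks—which is exactly why none of these tools, individually, makes the system a “GNU” system.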

For that matter, even on the command line, my two most extensively used programs (vim, mplayer) are not from GNU either…

Yes, using “Linux” is misleading (but generally understood correctly from context); no, using “GNU/Linux” is not an improvement. On the contrary, “GNU/Linux” is more misleading, shows a great deal of ignorance, and should be avoided in almost all cases*.

*An obvious exception would be a situation where GNU is the core topic and a contrast between GNU-with-the-one-kernel and GNU-with-the-other-kernel is needed.

GNU still plays a very valuable role through providing free-software alternatives for many purposes. This role, however, is not of a type that justifies “GNU/Linux”.

As an aside, Stallman’s own arguments focus unduly on the free-software aspect: Most of his text seems to argue that GNU is valuable through being keener on free software than Linux is—something entirely irrelevant to the question of naming. (In general, Stallman appears to see free software as a quasi-religious concern, trumping everything else in any context.)

Written by michaeleriksson

April 14, 2018 at 4:33 am

Other aspects of opinion than right and wrong

with 2 comments

I have long been convinced that being right is not the only aspect of opinion that matters: We also have to consider factors like why a certain opinion is held, whether it is “epistemologically sound”, and how willing someone is to reevaluate and (potentially) change it.* For instance, I have repeatedly observed that it is more rewarding to discuss something with someone who has the wrong opinion for a good reason than with someone who has the right opinion for a poor reason. Similarly, the main difference between a good scientist and a poor or non-scientist is not the level of education and experience, but how well they respectively fare in these regards.

*However, people who do poorly in these regards are disproportionately likely to also be (and remain) wrong.

In this, I have largely been driven by my observations of many PC and/or Leftist debaters, takes on religion, various superstitions, etc. People in the relevant groups often score very low on all these criteria: They do not only believe in something which is dubious or even outright and provably wrong—they also hold their beliefs for poor reasons, ignore evidence to the contrary, and refuse to change their opinions no matter what. However, I can also see strong parallels with how my own approach has changed as I went from child to teenager to adult, as well as how my recollections of other children and teenagers stack up to (at least some) adults.*

*Unfortunately, these comparisons usually involve different individuals as representatives for different ages, rather than a longitudinal comparison of the same individuals as they grow older.

Contrast e.g. someone who believes that Evolution is true based on an understanding of the proposed mechanisms, an exposure to fossil records, some knowledge of cladistics, … with someone who believes it “because my school book said so”. Or contrast this again with something truly mindless: “many Republicans are Creationists; I am a Democrat; ergo, I must believe in Evolution”. (This attitude, sadly, does not seem to be as rare as one would hope.) They all have an opinion considered correct by the overwhelming majority of scientists (and me)—but they do so for such different reasons that the one version of the same opinion cannot be considered equal to the other. Notably, it would take a very major change of influence to corrupt the opinion of the first; while the second could be turned merely by having had another book in the curriculum.

If we look at the “why”, which is my main target for this post, I have observed at least four main* categories over the years. In order of descending worthiness**:

*Subdividing these further is possible, but not worthwhile for my current purposes.

**Note that e.g. the question whether an opinion is correct lies in another dimension. It is quite possible to score low here and still have the (factually) right opinion; it is quite possible to score high and still have the wrong opinion.

  1. Opinions that are formed based on own thinking, analysis, observation, experimentation, …

    This typically includes e.g. the activities of many* scientists and philosophers, both professional and amateur.

    *There is no automatism, however: A good scientist should deal with this or the following item, depending on the details of the situation. Regrettably, not all scientists are good; regrettably, a disturbing portion of social scientists fall into the two last categories…

  2. Opinions that are formed through applying critical thinking to claims and reasoning by others.

    (In reality, there will almost always be some overlap with the first item. However, the first item is more likely to deal with using the ideas of others as input for own thoughts; the second with adopting (or not) the ideas of others, after own verification. The first, obviously, contains other aspects with no relation to the second.)

  3. Opinions that are uncritically taken over from a source of authority.

    Such authorities include parents, teachers, celebrities, (real or supposed) experts, books, …

    Note that the difference to the preceding item does not stem from the source (although some sources are better than others)—the main difference is the degree of own thinking and whatnot that is put into the process.

  4. Opinions that are held for reasons like peer pressure, loyalty, a wish to fit in, …

    This includes variations like “I must have the same opinions as my spouse”, “my class-mates all listen to band X; I must do so too”, “I must keep my opinions in line with my party/church/Oprah/…”, and “I must keep my opinions PC”.

    (A related case is those who merely pretend to have certain opinions, be it for the above reasons or for fear of repercussions, e.g. being sent to a Soviet work-camp or being ostracized. However, this discussion deals with the circumstances around the actual opinions.)

In terms of “epistemological soundness”, in turn, we have to look at questions like whether plausible and logically correct reasoning has been used, whether the conclusions match the known or believed* facts, etc. Cf. the typical differentiation between “knowing” something and merely being “right”.** (I refrain from making a more explicit list, because this area is much more of a continuum.)

*There is no shame in drawing reasonable-but-not-matching-reality conclusions from incorrect premises, if those premises are correspondingly plausible. For instance, Newtonian mechanics is flawed, due to not considering relativistic effects—but it would have been unreasonable to require Newton to address this issue, considering the state of knowledge and the experimental verifiability, within what was measurable at the time, of his mechanics.

**An interesting example in my own history is my first watching of “The Phantom Menace”: I knew that princess Leia was (to be) the daughter of Anakin, I knew that Padme claimed to be sent by queen Amidala, and I had just heard the very young Anakin inquire whether Padme was an angel. Factoring in the recurring theme of a prince/princess/king/whatnot pretending to be a commoner, I immediately predicted that a) Padme was actually Amidala herself, b) she was Leia’s mother. I was highly self-congratulatory as both predictions turned out to be true—and highly annoyed to, later on, find that my reasoning still flew apart on a faulty premise: Leia was not a princess due to her mother’s title, but due to her adoptive parents’.

The willingness to change an opinion, finally, is largely another continuum between those who are willing to make constant adjustments* and those who refuse to change an opinion, no matter what. An additional complication is that a deeply ingrained opinion can take years to change, and that a willingness to be open to changes can need a long cultivation. (I have a longer, half-finished post on related topics that has been lying around a few months. I will try to complete it soon.) The issue can be generalized to how dissenting opinions are treated: Not everyone is content with merely having an opinion set in stone—many go further and actively attack/censor/slander/… those who do not agree with that opinion.

*Strictly speaking, a further division might be needed into why an opinion is changed, and my first draft actually spoke of “in light of new evidence and arguments”. At a later stage, I removed this, seeing that there can be people who are willing to change their opinions, but do so for poor reasons. Whether the openness to change and any given realized change is a good thing, well, that depends on the other points of discussion above. (For instance, in the Evolution example above, switching opinion due to a new school book claiming something different from the old is a poor reason; doing so because it also provides a better analysis or more evidence than the first book is a better reason; doing so after considerable own analysis of known facts and pro and contra arguments is a good reason.)

As an aside, there are other aspects that can be interesting in other contexts, e.g. the degree to which someone actually understands the implications of a given fact (as opposed to merely being aware of the fact itself).

Written by michaeleriksson

April 8, 2018 at 9:38 pm

The rest of Orphan Black / (Follow-up: A few more thoughts on TV series)

leave a comment »

I have now gone through the rest of “Orphan Black” (cf. a recent post)—the overall quality* was high enough to offset the unfortunate story developments. However, while I would recommend the series, it also manages to make every error in the book when it comes to the story lines. For parts of the latter seasons, I had the feeling that the makers watched too much “Lost”** in their spare time. This includes an island (usually referred to as “the island”) with evil researchers, a surprise village, and a monster running around in the woods… The introduction of a 170-year-old character as the evil master-mind almost had me stop watching—this would have moved the introduction of (still) sci-fi level break-throughs to a ridiculously early time, and in a manner not compatible with previous impressions of the world of the series. To boot, having the evil master-mind be so old brings nothing to the series***. Fortunately, it turned out that this supposed Methuselah had simply stolen the identity of the (long dead) original founder of his movement and had thereby exaggerated his age by a-hundred-or-so years. Another great annoyance was the entirely unnecessary introduction of some form of low-grade ESP ability in the daughter of “Sarah”.**** It added nothing to the development of events, brought no benefit, and forced the introduction of a fantasy element into a sci-fi series*****.

*Especially Maslany’s acting, but there are also quite a few other competent actors involved, the interpersonal relationships are often developed and investigated in a manner that captures the viewer, and there are a number of funny scenes (notably around “Alison” and “Helena”) that complement the darker sides of the series and increase the entertainment value considerably.

**Another series that would have been better off with less intrigue, fewer competing parties, and whatnot. The supernatural aspects were mostly a hindrance. There is so much that could have been done with just having a plane crash on a deserted island, had the makers had more courage.

***But note that this might have been different in another series or type of series, e.g. a vampire show.

****Really, what is with this obsession with giving children super powers?

*****The fewer “leaps of faith”, assumed deviations from actual reality, whatnot, that are needed in order to make a TV series (film, book, …) plausible (while achieving the intended effect), the better. Having both unrealistic technology and magic in the same work is just unnecessary: We can have flying cars through technology (“Back to the Future”) or through magic (“Harry Potter”), but having both is just silly. A good illustration is the question of languages on different planets or in different time periods: There are sci-fi series that silently assume that everyone everywhere speaks modern U.S. English (e.g. “Stargate”)—except foreigners on Earth itself… There are others that resolve the issue through some type of unrealistically strong translator (e.g. “Doctor Who”), which through some mechanism can translate virtually any language in a transparent manner, leaving the impression that everyone speaks modern U.S. English. The latter requires one single unrealistic assumption; the former, unrealistic assumption after unrealistic assumption after unrealistic assumption.

The series would have been far better off cutting out three-quarters of the intrigues and secret organizations, making the main goal of the clones simply the finding of the needed cure, and otherwise focusing mostly on characters, situations, and relationships.*

*Not because these are necessarily the most interesting or entertaining things a TV series can do—one of my current favorite series is “Ash vs Evil Dead”. No, because these are where this particular series had its strengths, and because playing to those strengths would have made it that much better. (I stress, however, that there is nothing wrong with a bit of variety: The strengths should form the bulk, but “seasoning” with something else is perfectly fine. With “Orphan Black”, too much time was wasted on a weakness.)

Particularly positive were the extensive flashbacks in season 4 (?) that gave more background information, especially regarding “Beth” (the police-woman clone, who committed suicide at the beginning of the series’ first episode). More: This provided new perspectives, notably with “Beth” moving from a weak-seeming character, who caved in the face of adversity, to a heroic character, laying down her life in the protection of others.

The last episode of a series is often the hardest to make, and suboptimal results are common. With “Orphan Black” (whose last episode I watched less than an hour ago) this was so: The antagonists are defeated in an almost anticlimactic manner half-way through the episode, to leave room for an extended epilogue.* This epilogue was satisfying in that closure was reached and there were happy endings (almost) all around; however, it was also too cheesy and gave me the impression of something just thrown together, rather than something carefully crafted. It also manages to throw in another unnecessary error—too many clones. With several hundred clones world-wide, the likelihood that they would have gone undiscovered is small, due to factors like the birthday problem or the Bacon number: People meet by chance, people know people who know people, people land in papers, …, and the more clones are involved, the less likely it becomes that there are no common “birthdays”. (A similar criticism can be directed at the confluence of clones in the one local area; however, here there were a number of coincidental meetings and whatnots, and it would only have been a matter of time before such coincidences had led to public attention.)

*There is nothing wrong with an extended epilogue, per se. The problem is rather that the antagonists put up so weak a fight that a) the final showdown was hardly worth watching, b) the epilogue (in some sense) came too early. By analogy, consider an evening-filling boxing event where the concluding main fight ends with a first round knock-out.
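As a numerical aside (my own illustration, not something from the series): the birthday-problem effect mentioned above grows surprisingly fast, which is exactly why several hundred clones would be hard to keep hidden. A minimal sketch:

```python
from math import prod

def p_shared_birthday(n: int, days: int = 365) -> float:
    """Probability that at least two of n people share a birthday."""
    # P(no collision) = (days/days) * ((days-1)/days) * ... * ((days-n+1)/days)
    p_none = prod((days - k) / days for k in range(n))
    return 1.0 - p_none

# Already at 23 people, the probability of a shared birthday exceeds 50 %:
print(round(p_shared_birthday(23), 3))  # → 0.507
# At 100 people, a collision is all but certain.
```

The same reasoning applies to any “coincidental meeting” with a fixed per-pair chance: the number of pairs grows quadratically with the number of clones, so the chance that no coincidence ever surfaces shrinks rapidly.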

As an aside, another area (in addition to “tabula rasa”, cf. the original post) where “Orphan Black” is potentially dangerous is its negative take on eugenics: Eugenics does not only bring opportunities, but could actually turn out to be a necessity to rescue humanity from disaster. Every time eugenics is associated solely with mad scientists (evil master-minds, Nazis, whatnot) in fiction, the prejudice in the broad masses increases and its civilized use becomes less likely.

Written by michaeleriksson

April 6, 2018 at 2:05 am

Follow-up: My recent problems with Unitymedia


The situation around Unitymedia (cf. [1], [2]) remains extremely frustrating:

  1. My support inquiry is still unanswered, despite a reminder.
  2. I still cannot use the main Internet connection of my apartment.
3. While I am able to use the hotspot functionality as a workaround, it is (not unexpectedly) considerably slower than my real connection used to be. To boot, there are continual, highly annoying interruptions, leading to e.g. SSH sessions dying and needing a restart; moreover, “ping”* does not work at all. Not to forget: This type of access is inherently more dangerous than regular use, because it is easier for a hostile entity to listen in on and/or manipulate the communication.

*Neither does e.g. “traceroute”, and I suspect that ICMP is blocked in its entirety, which would border on the negligent, seeing that this protocol has an important role in ensuring the correctness and efficiency of Internet communications. (Blocking just ping specifically is dubious, but might be somewhat excusable due to its occasional abuse for denial-of-service attacks. For me, however, the lack of ping is a major nuisance, since I need to keep an eye on a few servers, and ping is the best way to do this; especially when reachability problems can be either a server-side problem or a connection problem, as is currently the case.)

4. The miserable web interface of the router* works better in the newly installed Chromium than it did in Firefox; however, the situation is not satisfactory: Roughly half of the attempts to run the built-in trouble-shooting result in a long wait and then an unspecified failure; the other half result in a long wait and the claim that everything is now OK—while de facto everything remains just as broken as before.

    *I note that the router is provided by and is the property of Unitymedia, with the implication that problems, malfunctions, whatnot, are Unitymedia’s responsibility—not e.g. those of an independent retailer.
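As an aside on the monitoring problem above: when ICMP is blocked, a plain TCP connection attempt can serve as a crude substitute for ping. A minimal sketch (host and port below are hypothetical placeholders; any port that the server is known to serve will do):

```python
import socket

def tcp_reachable(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Crude ping substitute: try to open a TCP connection to a port
    that the server is known to serve (HTTPS by default)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical usage:
# tcp_reachable("www.example.org")  # True if the web server answers
```

Note the limitation: unlike ping, this only tests one specific service, and cannot distinguish a down server from a merely blocked or unserved port.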

Written by michaeleriksson

April 1, 2018 at 7:24 pm