Michael Eriksson's Blog

A Swede in Germany

Posts Tagged ‘software’

The misadventures of a prospective traveler

leave a comment »

Three issues that have collaborated to drive me nuts today:

  1. I had promised my father and step-father to try to come to Sweden over Christmas or New Year’s.

    For this purpose, I had already made several attempts to find suitable tickets with the Tor Browser (relayed through Tor or directly over my non-torified Internet connection, even with JavaScript enabled). This had proved very annoying and unproductive. For instance: The Lufthansa site simply does not load at all, it just hangs with a perpetually waiting tab. The SAS site loads, appears to work, allows me to fill in all criteria—and then does nothing when I try to submit the search. Ditto EuroWings. A meta-search that I attempted to use was hopelessly slow, insisted on (with every single search) re-including flights with a change of planes,* and also insisted on re-ex(!)cluding potentially cheaper late-night** flights. At least one site interrupted my search by having a JavaScript pop-up demand that I take a survey to improve its usability…*** Almost invariably, these sites had various annoying blend-ins/-outs, animations, overly large images, poorly structured pages, …

    *Which roughly doubles the travel time between Düsseldorf and Stockholm (that have the most suitable airports) and is highly unwanted by me.

    **A smaller negative than changing flights…

    ***First tip: Do not molest customers with such pop-ups!

    Having given up and postponed the search twice already, and it now being the 28th of December, I decided to install a brand-new vanilla Firefox, in the hope that at least the SAS/EuroWings problems would be explained by some version incompatibility with either the Tor Browser (as a Firefox derivative) or my version being too low.*

    *Many web-sites, in the year 2018!, still fail to make browser-agnostic implementations, insist on very recent versions of browsers, and similar—usually with no indication to the user that they do. (And visiting an eCommerce web-site without JavaScript on is more-or-less bound to fail.)

    First attempt, SAS: Everything seemed to work, but the web-site was now even more visually annoying than before. The choosing of dates, for some reason, worked in a different manner than before and was highly counter-intuitive. Seeing that there also (not entirely surprisingly) were no good flights prior to the New Year and that prices were unnecessarily high, I decided to look elsewhere first.

    Second attempt, Lufthansa: Still did not load…

    Third attempt, EuroWings: To my great and positive surprise, everything appeared to work perfectly, showing me timely and much-cheaper-than-SAS flights. Things kept working as I began my purchase, entering name, address, and whatnot—and I even found the option to pay by invoice!* Alas: As I tried to confirm the last step, I was met with an uninformative error message and the request to start again from the very beginning. There was, in particular, no mention of factors like the last few seats on that flight having been suddenly snatched by someone else, or invoice payment not being possible on such short notice**. Before starting over from the beginning, I gave just the last page a second try. I re-entered some (inexplicably deleted) information and re-submitted. Same error message—but now followed by an intrusive pop-up suggesting that I start a chat with someone… I clicked the dismiss button—but, instead of disappearing, the pop-up did a weird and time-consuming animation and kept blocking a significant part of the page even after the animation was finished. At this point, I had had enough, closed my browser, and decided to find other means—which will likely amount to going to a physical travel agency and visiting at some point after the New Year…

    *Thereby removing one of my last doubts, namely the risk that I would be forced to pay with a combination of credit card and 3D-secure—which I (a) have never attempted with my new bank, (b) fear would involve the idiotic use of SMS (I do not currently have a cell provider), (c) had found to simply not work at all with my previous bank (the to-be-avoided Norisbank).

    **For which I would have had some sympathies—but, in that case, invoice payment should never have been offered in the first place.

    These events are all the more annoying, seeing that there actually was a time when it was reasonably easy to handle tasks like these over the Internet—it really is not that hard to implement a decent search–choose–pay UI. However, year after year, the usability of various Internet shops and whatnots grows worse and worse, and appears to make more and more specific demands on the browsers. Much of this goes back to the obsession with Ajax. Credit-card payments are also not what they used to be, being much more laborious and likely to fail than in the days before 3D-secure and similar technologies. Worse, from the customer’s point of view, they likely lead to a net loss of security, whereas the stores and involved payment entities see the gains.* Then, if not relevant above, we have the inexcusably poor efforts of various delivery services, notably DHL, which often make it less of a fuss to go to a store and pick up a purchase in person…

    *For the customer, the risk that someone will manage to fake a payment is reduced, but if someone does, he has very few options to prove that he was the victim of fraud. Without 3D-secure, the burden of proof was on the other party, and the customer had very little risk at all (short of additional work). The merchants and credit-card acquirers, on the other hand, can have large costs and losses when a fraudulent purchase is followed by a charge-back—and 3D-secure helps them, not the customer, by reducing this risk.

  2. Installing and setting up Firefox proved to be a PITA. Apart from the issues of the next item, I note that any version of Firefox has tended to come with very poor default settings, including default UI behavior; and that the “new” Firefox is highly reduced compared to the “old”.* After installation and prior to my attempts at finding tickets, I spent at least five minutes going through and correcting settings—that were then, obviously, only valid for that one user account**…

    *The changes would be enough for a long text of their own. For now, I will just note that (a) the GUI-configurable settings have been reduced to a fraction of their previous scope, (b) the general attitude described in e.g. [1] is continued.

    **I have a number of user accounts for different purposes, in order to reduce the risk of and damage from security breaches and whatnots. This includes separate accounts for eCommerce (the current), my professional activities, ordinary surfing, porn surfing, and WordPress.

    To boot, a new dependency was installed: libstartup-notification0. I did some brief searching as to what this is, and it appears to be just a way for an application to change the shape of the cursor during startup… (Beware that my information might be incomplete.) Firstly, why would I want the cursor to change?!? Secondly, even if this was seen as beneficial, it certainly is not reason enough to add yet another dependency—there already are too many useless dependencies, many of them recursive (also see portions of a text linked below).

  3. The idiotic Debian “alternatives” system and the “desktop nonsense”.

    Disclaimer: Some familiarity with Debian or similar systems might be needed in order to understand the below.

    When a Debian user installs an application, e.g. Firefox, /usr/bin/firefox (or whatever applies) does not contain the Firefox binary—nor even a link to the Firefox binary. Instead, it links to an entry in /etc/alternatives, which in turn links to the actual binary (unless a certain setup has even more indirections involved). To boot, this system is administered by a poorly thought-through tool (update-alternatives) and/or configuration; further, it is vulnerable to applications arbitrarily overriding the status quo, as well as adding pseudo-applications (e.g. x-www-browser) that at least I simply do not want polluting my system.

    In fact, these pseudo-applications are likely the reason why this system was added in the first place—because e.g. x-www-browser can be “provided” by a thousand-and-one different real applications, it would be highly complicated to work with straight links, let alone binaries (especially when one of the “providers” is removed). For real applications, there is a much better way to solve such problems—namely, to just link e.g. /usr/bin/firefox directly to the usually sole instance of Firefox present and to give the user an explicit choice of the “default” Firefox every time a new Firefox version is installed or an old one removed.
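    The double indirection is easy to see and to simulate. The following sketch rebuilds the same two-hop link chain in a scratch directory (the paths are illustrative; no root and no real alternatives setup is needed):

```shell
#!/bin/sh
# Simulate the alternatives-style indirection in a scratch directory:
# usr/bin/firefox -> etc/alternatives/x-www-browser -> the actual binary.
# (Illustrative paths; a real system resolves under / instead of $tmp.)
tmp=$(mktemp -d)
mkdir -p "$tmp/usr/bin" "$tmp/etc/alternatives" "$tmp/opt/firefox"
echo 'real binary' > "$tmp/opt/firefox/firefox"

ln -s "$tmp/opt/firefox/firefox" "$tmp/etc/alternatives/x-www-browser"
ln -s "$tmp/etc/alternatives/x-www-browser" "$tmp/usr/bin/firefox"

# readlink shows only the first hop; readlink -f resolves the whole chain.
readlink "$tmp/usr/bin/firefox"      # .../etc/alternatives/x-www-browser
readlink -f "$tmp/usr/bin/firefox"   # .../opt/firefox/firefox
cat "$tmp/usr/bin/firefox"           # prints: real binary
```

    On an actual Debian system, `readlink -f /usr/bin/firefox` (or whichever name applies) shows where the chain currently ends.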

    Why do I not want these pseudo-applications? Firstly, they bring me and most reasonable users, at best, a very minor benefit (for which they bring the cost of the indirections and the greater effort needed when looking for something). Secondly, the “providers” are usually sufficiently different that unexpected effects can occur.* Thirdly, they are often used by other applications in a manner that is highly unwanted: For instance, one of the alleged main benefits of x-www-browser is that any other application, e.g. an email reader, should have an easy way to open an HTML document, without having to bother to check what browsers are installed—but I absolutely, positively, and categorically do not want my email reader to even try this. In a saner world, this would be something configurable in the email reader (and only there), and those who want this endangerment can configure it, while those who do not want it simply do not configure it. By having x-www-browser, the user no longer has such control. Worse: Since the real application behind x-www-browser can change without any action on his part (be it due to presumptuous applications or an administrator with different preferences), the effects can be very, very different from the expected—e.g. that a known browser with JavaScript, images, and Internet access disabled (appropriate for reading e.g. HTML emails) is replaced with an unknown browser with everything enabled. (Which, in combination with email, could lead to e.g. a security intrusion, leaking of data to a hostile party, activation of unethical tracking mechanisms of who-read-an-email-when, and similar.)

    *For instance, there are many highly specific tool families, e.g. awk, whose members will superficially appear to be and behave identically (much unlike e.g. Firefox and Chrome/Chromium, as x-www-browser candidates), but will have subtle differences that can lead to a failed execution or a different-than-expected result in certain circumstances. Such problems, especially when undetected, can have very serious consequences. It is then much better for the user to, depending on circumstances, pick the specific awk-version he needs by explicit call (=> the alternatives system is not needed), make sure (for a one-user system) that he only ever has one instance installed and use the generic “awk”-name (=> the alternatives system is not needed), or restrict himself to only the common base of identical features. In the last case, the alternatives system would have some justification—however, it would place a very high burden on the user in terms of not making mistakes, might still fail due to undocumented differences or bugs, and is vulnerable to other differences, e.g. regarding performance. Obviously, this would also reduce the available capabilities of the tool in question—in many cases, quite severely.

    Similar remarks concern the “desktop nonsense” (which would deserve a long text of its own; a partial treatment is present). In this particular case, there are at least two* further mechanisms (/usr/share/applications, /usr/lib/mime/packages/) that cause similar problems, including allowing e.g. email readers to launch things that they should not launch. I have used the tool chattr to forbid additions to these two directories; however, due to the incompetence of the apt implementers and/or package builders, this is only a partial help: Despite these entries being unimportant for the actual functioning of the system/the installed application, the chattr-setting leads to a hard error from the apt-tools. I now have to “de-chattr” the directories, re-attempt the install, manually delete the added files, and “re-chattr” the directories … Effectively, I do not prevent the directories from being polluted—instead I trade an increased work-load for the benefit of knowing when I have to manually clean them up after pollution.

    *Proof-reading, I suspect that /usr/lib/mime/packages is not strictly desktop related, and might better have been treated as a third area. In the big picture, this does not matter. (And I do not have the energy to sort out “what is what” at the moment.)
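    Concretely, the cycle looks something like the following (a sketch of my own routine, not an official recipe; “somepkg” is a placeholder, and the chattr commands require root):

```shell
# One-time: make the directories immutable, so nothing can add files to them.
chattr +i /usr/share/applications /usr/lib/mime/packages

# Later, when an installation fails with a hard error from the apt-tools:
chattr -i /usr/share/applications /usr/lib/mime/packages   # unlock
apt-get install somepkg                                    # re-attempt the install
rm -f /usr/share/applications/somepkg.desktop              # remove the pollution
chattr +i /usr/share/applications /usr/lib/mime/packages   # lock again
```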


Written by michaeleriksson

December 28, 2018 at 6:43 pm

Wordpress at it again: Backups and security through obscurity

leave a comment »

The stream of outrageous incompetence by WordPress continues…

For the first time in half an eternity,* I decided to download a backup of my WordPress blog. In the past, this has resulted in (most likely) a zip-file being offered for saving. Today, however, I was met with a message that a link to this zip-file would be sent to my email account… The link, in turn, was valid for a full seven days, downloadable by any arbitrary Internet user, and protected only by (what I hope was) a random sequence of characters added to the file name. This is not only highly user-unfriendly—it is also a great example of idiots relying on “security through obscurity”: It is true that no-one who does not know the random part of the file name (obscurity) will be able to download the file (“security”). However, with the state of email security, a great number of hostiles** would have had the opportunity to grab the email contents and find the link. To boot, this approach opens the door for simple errors or oversights by WordPress to open an unnecessary security hole, e.g. if a list with the current such links is similarly weakly protected… Other risks might exist, e.g. that it might be easier for a family member or visitor to get hold of the email/link than access to the WordPress account.*** In contrast, with the old system, the backup was transient and protected by the normal user-account controls—and if those are breached, it does not matter how backups are handled…

*This is not as bad as it sounds: I write all posts offline, with separate backups, in the first place; there are not that many comments; and I intended to leave WordPress at some point anyhow. Correspondingly, little data would actually be lost, if something bad happened.

**Notably, not necessarily parties hostile towards the individual blogger. More likely, it would be someone hostile towards WordPress or who sees WordPress as an easy source of data. Such a hostile would then watch the outgoing traffic from the WordPress mail-servers, grab all the links it could find, and then simply download everything. And, yes, many blogs will contain contents that are not intended for public viewing, including private blogs, blogs restricted to a smaller circle, and public blogs with unpublished drafts.

***Anonymous bloggers are not necessarily known to even a closer circle and even those who are might have contents not yet suitable for viewing by others. (This need not even relate to something truly secret, which would be foolish in the extreme to put on WordPress in any manner, but could include e.g. a draft of a post dealing with an upcoming proposal, surprise party, whatnot.)

If we consider only the delay, there might be some justification to accommodate extremely large blogs, where there is at least a possibility that the time needed* for the creation of the backup might be too large for normal in-browser interaction. However, if so, the correct solution would be to present the download only within the account itself. Indeed, even if we assume that this type of linking was acceptable (it is not), the procedure is highly suboptimal: The link should have been presented in the confirmation page, not sent by email;** the availability time should have been far shorter (a day?); and the contents should have been deleted or otherwise made unavailable upon download (if something goes wrong, the user can always create a new backup).

*I received the email almost instantaneously, implying that my backup would have had to be at least one, more likely several, orders of magnitude slower than it actually was before this concern became legitimate.

**Or the contents behind an emailed link be password protected, with the password displayed in the confirmation page; or the contents only being served after a successful WordPress login.

Written by michaeleriksson

September 28, 2018 at 7:36 pm

Problems with adduser

leave a comment »

Doing some light administration, I stumbled upon a few idiocies in the Linux tool “adduser”* and the associated config-file /etc/adduser.conf on my Debian** system.

*Used, unsurprisingly, to add new users to the system.

**Who is to blame for what is not clear. As I have grown increasingly aware over the last few years, the Debian developers make quite a few poor decisions and changes to various tools and defaults that do not correspond to the wishes of the original tool-makers or that violate common sense and/or established “best practices”.

  1. This tool is the source of the “skeleton” files* added during user creation (well, I know that already) and contains a config variable for the corresponding location (SKEL)—but provides no obvious way of turning this idiocy off entirely. (Although a secondary config variable for blacklisting some files, SKEL_IGNORE_REGEX, can probably be (ab-)used for this purpose; the better way is likely to just keep the directory empty.)

    *Presumably so called because they provide a metaphorical skeleton for the user’s home directory. Note that there are other mechanisms that create unwanted contents in home directories. (One example is discussed in an earlier post.)

    Why is this an idiocy? Well, while there might be some acceptable use case for this, the typical use is to fill the respective user’s home directory with certain default configurations. This, however, simply does not make sense: Configuration settings should either be common, in which case they belong in global files, not in individual per-user copies; or they should be individual, and then the individual user should make the corresponding settings manually. In neither case does it make sense to copy files automatically.

    Indeed, in the former case, the users and administrators are put out, because (a) the skeleton files must be kept synchronized with the global configuration files (to boot in an unexpected and counter-intuitive manner), (b) the users get a snap-shot of the configuration—the configuration as it was at the time of user creation, without any changes that were made later.

    In the latter case, a user who wants his own config file can simply copy the global configuration file manually.*

    *Experienced users tend to not want anyone else’s preferences in their config files, and have often made too extensive changes or are set on a certain set of values they have used for years, implying that they will likely not want the global files to begin with.

    Now, one comparatively rare case where skeleton files could make sense, is when setting up several sets of users that have different characteristics (e.g. administrators, Java developers, C developers)—but that will not work with this mechanism, because the skeleton files are common for all users. In order to get this to work, one would have to provide an entire new config file or play around with command-line settings—and then it is easier to just create the users without skeleton files and then copy the right files to the right users in a secondary step.

    As an aside, I do not recommend the sometime strategy of having a user-specific config file call a global config file to include settings from there (as might have been a workaround in the case of skeleton files): This tends to lead to confusion and unexpected effects, especially when the same user-specific config file is used on several systems (e.g. a work and a home computer), when global config changes, or when something is done in an inconsistent order. Instead, I recommend the individual user to either use only the global file or only his own.
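    For reference, the corresponding knobs live in /etc/adduser.conf; the following excerpt (with illustrative values) shows the above-mentioned abuse of the blacklist regex, next to the cleaner alternative:

```shell
# /etc/adduser.conf (excerpt; values illustrative)
SKEL=/etc/skel            # source of the skeleton files copied into new home directories
SKEL_IGNORE_REGEX=".*"    # (ab)use: match every file, so that nothing is copied
# The cleaner alternative: leave these alone and simply keep /etc/skel empty.
```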

  2. When the home directory is created, the typical access-control defaults (“umask”) are ignored* in favor of the config variable DIR_MODE—and this variable has the idiotic and inexcusable value 0755**. In other words, home directories are created in such a manner that everyone can read*** the directory per default. It is true that this will not give the rights to read the contents of files****, but being able to see file names can be bad enough in itself: Consider e.g. if the wrong person sees names like “resignation_letter”, “proposal_plan”, “porn”, “how to make a bomb”, …

    *Such duplication of responsibility makes it harder to keep security tight, especially since the admins simply cannot know about all such loopholes and complications—if in doubt because they change over time or can vary from Unix-like system to Unix-like system.

    **The best default value is 0700, i.e. “only the owner can read”; in some cases, 0750, i.e. “owner and group members can read”, might be an acceptable alternative.

    ***To be more specific, list the directory contents and navigate the directory. (Or something very close to this: The semantics of these values with regard to directories are a bit confusing.)

    ****Files (and sub-directories) have their own access rights that do respect the value of the umask (at their respective creation).
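    The difference is easy to demonstrate without involving adduser at all (a sketch in a scratch directory; 0755 is the problematic default, 0700 the suggested value):

```shell
# Create two directories the way adduser would, once with the problematic
# default DIR_MODE=0755 and once with the suggested 0700, and compare.
tmp=$(mktemp -d)
mkdir -m 0755 "$tmp/home-default"   # world may list and enter the directory
mkdir -m 0700 "$tmp/home-strict"    # owner only

touch "$tmp/home-default/resignation_letter"

stat -c '%a' "$tmp/home-default"    # prints: 755
stat -c '%a' "$tmp/home-strict"     # prints: 700
# With 755, any local user can list home-default and see the name
# "resignation_letter", even if the file's own mode protects its contents.
```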

  3. The format of a user name is subject to restrictions via the configuration variable NAME_REGEX. Unfortunately, this variable only appears to add restrictions. Quoting the documentation:

    adduser and addgroup enforce conformity to IEEE Std 1003.1-2001, which allows only the following characters to appear in group and user names: letters, digits, underscores, periods, at signs (@) and dashes. The name may not start with a dash. The “$” sign is allowed at the end of usernames (to conform to samba).

    An additional check can be adjusted via the configuration parameter NAME_REGEX to enforce a local policy.

    This is unacceptable: Unix-like systems typically accept almost any character in a username, and what name schemes or restrictions are applied should be a local decision—not that of some toolmaker.

    For reasons of interoperability through a “lowest common denominator”, it makes great sense to apply some set of restrictions per default; however, these restrictions must be overrideable and should have been integrated in NAME_REGEX (or a second, parallel variable).

    As an aside, I am quite surprised that “@” is allowed per default, seeing that this character is often used to connect a user with e.g. a domain or server name (as with email addresses). When the user name, itself, can contain an “@”, it becomes impossible to tell for certain whether “X@Y.Z” is a user name (“X@Y.Z”) or whether it is a user name (“X”) combined with a server or domain (“Y.Z”). In the spirit of the aforementioned “lowest common denominator”, I would not only have expected this to be forbidden—but to be one of the first and most obvious things to be forbidden. (I would speculate that there is some legacy issue that requires that “@” remains allowed.)
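    To illustrate, the documented default policy corresponds roughly to the regex below (my own approximation, not the expression that adduser actually uses internally):

```shell
# Letters, digits, underscores, periods, at signs, and dashes; no leading
# dash; an optional trailing "$" (for samba machine accounts).
name_ok() {
    printf '%s\n' "$1" | grep -Eq '^[A-Za-z0-9_.@][A-Za-z0-9_.@-]*\$?$'
}

name_ok 'alice'    && echo 'alice: accepted'
name_ok 'x@y.z'    && echo 'x@y.z: accepted (the surprising case discussed above)'
name_ok '-evil'    || echo '-evil: rejected (leading dash)'
```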

Written by michaeleriksson

July 3, 2018 at 4:44 am

XDG, lack of respect for users, and bad design

with 2 comments

Normally and historically, using Linux has been much more pleasant than using MS-Windows, at least for those who stay away from KDE, Gnome, et co. Unfortunately, there has long been a negative trend towards worse usability, greater disregard for the user’s right to control his own computer, etc. (See e.g. [1] or [2] for similar discussions.)

Today, after doing some experiments with various X* and WM setups, I found something utterly inexcusable, a truly Microsoftian disregard for the user’s interests:

*By which I refer to the window system named “X”—unlike many other instances where I have used it as a placeholder.

Somehow, a tool xdg-user-dir had been activated for the first time during my experiments (whether this tool was installed per default or had snuck its way in during my experiments, I do not know)—and promptly created a slew of directories in one of my user accounts. There was no inquiry whether I wanted them created; they were just silently created. To boot, they were exactly the type of directories that I do not want and that I have always deliberately deleted when they were present on installation: Random directories like “Desktop”, “Pictures”, “Music” bring me no value* whatsoever and do not in any way, shape, or form match my computer use—starting with the fact that I use different user accounts for different tasks and order my files and whatnot according to entirely different criteria**. They clutter my file system—nothing more, nothing less.

*Note that, unlike with MS Windows, these directories are not needed to keep the OS from throwing a fit. (Although it is conceivable that one of the worse desktop environments would—but then I do not use these!)

**The exact criteria vary, but the most common is “by topic”, rather than “by type”. For instance, a video and an eBook on the same topic would land in the same directory; two eBooks on different topics would land in different directories.

Having a look at the documentation, the functioning of this tool appears to be fundamentally flawed, even apart from its presumptuousness: In my reading, the directories would simply have been created again, and again, and again, had I just deleted them. This too is inexcusable—a manually deleted directory should stay deleted unless there are very strong reasons speaking to the contrary*/**. Now, I have the computer knowledge and sufficient drive that I actually could research and solve this issue, through finding out what idiotic program had added the directories and making sure that a repetition did not take place (and do so before my other user accounts were similarly molested)—but this is still a chunk of my time that I lost for absolutely no good reason, just because some idiot somewhere decided that he knew better than I did what directories I should want. For many others, this would not have been an option—they would have deleted the directories, seen them recreated a while later, deleted them again, …, until they either gave up with a polluted user account or screamed in frustration.

*Such reasons would typically involve a directory or file that is needed for practical operations while not being intrusive to the user (almost always implying a “dot” or “hidden” file). Even here, however, querying the user for permission will often be the right thing to do. To boot, it is rarely the case that an application actually needs to create anything in the user account, when not explicitly told to do so (e.g. through a “save” action by the user). Off the top of my head, I can only think of history files. For instance, creating a config file is only needed when the user actually changes something from the default config (and if he wants his config persistent, he should expect a file creation); before that it might be a convenience for the application, but nothing more. Temporary files should be created in the temp-directory, which is in a central place (e.g. /tmp) per default (and should the user have changed it, there is an obvious implicit consent to creation). Caching is a nice-to-have and would also often take place in a central location. Indeed, caching is an area where I would call for great caution and user consent both because of the potential privacy issues involved and because caching can take up quite a bit of storage space that the user might not be aware of. Finally, should the user wish to save something, it is up to him where to save it—he must not be restricted to an application specific directory. All-in-all, most cases where an application “needs” a directory or file will be pseudo-needs and signs of poor design.

**Looking specifically at the directories from above, I note that they are not hidden—on the contrary, they are extremely visible. Further, that they almost certainly are intended for one or both of two purposes: Firstly, to prescribe the user where he should put his this-and-that, something which is entirely unacceptable and amateurish—this is, remains, and must be the user’s own decision. Secondly, to ensure that various applications can rely on these directories being present, which is also entirely unacceptable and amateurish: Applications should not make such assumptions and should be capable of doing their job irrespective of the presence of such directories. At the outside, they can inquire whether a missing directory should be created—and if the offer is turned down, the applications still need to work. If they, unwisely, rely on the existence of, say, a picture directory at all, it should also be something configurable. They must not assume that it is called “Pictures”, because the user might prefer to use “Images” and already have other tools using that directory; similarly, they must not assume that the directory, irrespective of name, is in a given position in the file system, because the user might prefer e.g. “~/Media/Pictures” over “~/Pictures”; he might even have put all his pictures on a USB-drive or a server, potentially resulting in entirely different paths.
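For anyone bitten by the same tool: in my reading of its documentation, it can be told to stop (re-)creating the directories through its per-user config files (a sketch; paths as per the XDG defaults, and the trailing “$HOME/” relies on the documented rule that a directory set to the home directory itself counts as disabled):

```shell
# Stop xdg-user-dirs-update from creating or re-creating anything:
mkdir -p "$HOME/.config"
printf 'enabled=False\n' > "$HOME/.config/user-dirs.conf"

# Alternatively/additionally, point the individual directories at $HOME
# itself, which the tool treats as "this directory is disabled":
cat > "$HOME/.config/user-dirs.dirs" <<'EOF'
XDG_DESKTOP_DIR="$HOME/"
XDG_DOWNLOAD_DIR="$HOME/"
XDG_PICTURES_DIR="$HOME/"
XDG_MUSIC_DIR="$HOME/"
EOF
```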

Looking up XDG on the web gives a negative impression. For instance, the Wikipedia page has a list of hosted packages/projects, some of which are among my least favorite, because they are e.g. redundant, do more harm than good, or replace superior solutions, including D-Bus, PulseAudio, systemd, and Poppler*. To boot, they often violate the design principles that used to make Unix and its derivatives great. Some others leave me ambivalent, notably Wayland: Even assuming that X needs replacement or improvement**, Wayland seems to be the wrong way to go, as it reduces both flexibility and separation of concerns.***

*At least with regard to how it has screwed up xpdf.

**It quite probably does, about three decades (!) after the last major specification release. However, in my own use, I cannot think of anything major that bothers me.

***With the reservation that I have not read anything on Wayland in years and am not aware of the latest state: Because I am not a Ubuntu user, I have not been forced to a practical exposure.

Looking at the “Stated aims” further down the page, most are good or neutral (with some reservations for interpretation); however, “Promote X desktops and X desktop standards to application authors, both commercial and volunteer;” is horribly wrong: Promoting such standards to desktop authors is OK, but not to application authors. Doing the latter leads to unnecessary dependencies, creates negative side-effects (like the unwanted directories above), and risks us landing in a situation where the system might need a desktop to function—not just a window manager or just a terminal.

For instance, I have now tried to uninstall everything with “xdg” in its name. This has left me with “xdg-utils” irremovable: If I do uninstall it, a number of other packages will go with it, including various TeX and LaTeX tools that have nothing to do with a desktop. In fact, there is a fair chance that they are all strictly command-line tools…

I have also searched for instances of “freedesktop” (a newer name, cf. Wikipedia). Trying to uninstall “hicolor-icon-theme” (admittedly not something likely to be a source of problems) leads to a request to also uninstall several dozen packages, many (all?) of which should reasonably still work without an external icon theme. By all means, if icons can be re-used between applications, try to do so; however, there must be sufficient basic “iconity” present for a good program to work anyway—or the programs must work sufficiently well without icons to begin with. Indeed, several of these are either command-line tools (that should not rely on icons in the first place) or make no or only minimal use of icons (e.g. pqiv and uzbl).

Worse, chances are that a considerable portion of these tools only have an indirect dependency: They do not necessarily need “hicolor-icon-theme” (or “xdg-utils”). Instead they rely on something else, e.g. a library that has this dependency. Here we can clearly see the danger of having too many dependencies (and writing too large libraries)—tools that do not need a certain functionality, library, whatnot, still need to have it installed in order to function. This leads to system bloat, greater security risks, and quite possibly diminished performance. Unfortunately, for every year that goes by, this problem appears to grow worse and worse.
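(The mechanics of such indirect dependencies are easy to illustrate. A minimal sketch in Python, in the style of a package manager’s dependency resolution; the graph itself is invented for illustration, with only the package names taken from the discussion above:)

```python
# Toy illustration: transitive dependency resolution. Installing a pure
# command-line tool drags in desktop baggage via an intermediate library.
DEPENDS = {
    "some-cli-tool": ["libfoo"],          # a pure command-line tool...
    "libfoo": ["libgui"],                 # ...linking a do-everything library...
    "libgui": ["hicolor-icon-theme"],     # ...which depends on an icon theme
    "hicolor-icon-theme": [],
}

def install_closure(pkg, seen=None):
    """Return the full set of packages that installing `pkg` pulls in."""
    if seen is None:
        seen = set()
    if pkg in seen:
        return seen
    seen.add(pkg)
    for dep in DEPENDS.get(pkg, []):
        install_closure(dep, seen)
    return seen

print(sorted(install_closure("some-cli-tool")))
# → ['hicolor-icon-theme', 'libfoo', 'libgui', 'some-cli-tool']
```

The command-line tool never uses an icon, yet cannot be installed (or kept installed) without the icon theme; this is the system bloat described above.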

Forcing XDG functionality and whatnot into applications that do not actually need them is a bad thing.

(Similarly, a great deal of my skepticism against D-Bus arises from the fact that it is “needed” by every second application, but is rarely actually used for something sensible. In the vast majority of cases, the application would have been just as good without D-Bus, but it still has a dependency and it still presumes to start an unwanted D-Bus at will—which it typically does not clean up afterwards…)

Written by michaeleriksson

May 8, 2018 at 1:31 pm

Follow-up: Linux vs. GNU/Linux

with 6 comments

In light of a lengthy reply by a user codeinfig to an earlier post on the issue of “Linux” vs. “GNU/Linux”, I revisit this topic.

This in two parts: An extension of the original discussion (partially driven by the reply, but mostly held abstract) and a more specific rebuttal of said reply (formulated in terms of a direct answer).

General discussion:

  1. At the time of my original post, I actually was not aware of the amount of controversy that surrounded this issue, mostly seeing Stallman’s position as an example of flawed thinking, and my intention was (like in much of my previous writings) to point to such flaws. (Possibly, because commercial Unixes dominated my experiences of the Unix-like OSes until the early new millennium.)

    With hindsight, it was highly naive of me to not expect the topic to be “hotter”: This is the Internet, and more or less any question that could cause controversy will be discussed/argued/flame-warred at length—even be it something so seemingly trivial as a name. To boot, this issue appears to be almost as old as Linux, giving it plenty of time to have been discussed.

  2. I stress again that I do not claim that “Linux” is an appropriate name when we do not speak of the kernel (cf. previous statements). However, “GNU/Linux” does not solve the involved problem. On the contrary, it compounds it further, because the arguments against using “Linux” apply even more strongly against “GNU/Linux”. (However, this might very well have been different in the 1990s.)
  3. “GNU”, on its own, has at least three different meanings: The OS envisioned by Stallman, the GNU project, and the GNU set of tools/programs. Of these, I would consider the first the least relevant today, because this OS has simply not materialized in its full intended form, even after several decades, and I honestly cannot recall when I last heard that meaning used prior to this discussion. Even as early as 1994, when I started college and made my first contacts with Unix (specifically SunOS), the tools were arguably more important than the OS: The default setup after login consisted of one instance of Bash (running in an Xterm) and one instance of GNU Emacs; the main computer work for the first semester consisted of writing and executing LISP programs using Emacs—even under a commercial Unix version, with its own set of pre-existing editors, shells, whatnots, GNU tools had been preferred. An alternate editor and semi-imitation of Emacs, MG, was typically referred to as “Micro-GNU”*, showing the relative importance of Emacs within the GNU sphere at the time.

    *The actual name was “Micro-GNU-Emacs”, with the intended focus on “Emacs”, with “GNU” only serving to avoid confusion with other (less popular) variations of Emacs. (A distinction that hardly anyone bothers with today, “Emacs” being used quasi-synonymously with “GNU Emacs”, just like “Windows” usually contains an unspoken “Microsoft”.) However, so dominant was Emacs in the perception of GNU that most people shortened the wrong component of the name…

    But, by all means, let us go with the OS-meaning*: We say “GNU” and mean an OS. Even now, however, the use by codeinfig does not make sense. He appears to use “GNU” (resp. “gnu”) as an all-encompassing term for the OS in a very wide sense** or even the whole system, effectively saying that not only is e.g. a Debian system “GNU/Linux”—it is actually “GNU”… This goes towards the absurd, because even when we speak of “GNU” as an OS, the possible interpretations are 1) the whole original vision by Stallman, i.e. “GNU/HURD” and 2) just the “GNU” part of “GNU/HURD” (resp. “GNU/kernel-of-your-choice”). If we take the first, Debian is only even a GNU candidate when the HURD kernel is used (which will not be the case when we have a Linux-version of Debian) and speaking of just “GNU” in the manner of codeinfig is clearly wrong; even speaking of “GNU/Linux” would be clearly wrong a priori. If we take the second, “GNU/Linux” would still be conceivable (before looking at other aspects of the issue), but the equation GNU = GNU/Linux would be obviously incorrect.

    *For simplicity of discussion I will try to stick to this meaning in the rest of the text, where the difference between the three matter and where the right choice is not clear from context. Note that earlier references do not necessarily do so.

    **An annoying problem in this situation is that it is very hard to define the border between OS and application in such a manner that everyone is happy and all circumstances are covered. A more fine-grained terminology would be beneficial: dividing software into just “OS” and “applications” is as simplistic as dividing the year into just winter and summer. (However, this is secondary to the current discussion.)

  4. If we do use the OS meaning, then, yes, I would consider GNU mostly irrelevant today. It is of historical importance and it might very well grow important again, but today it is dwarfed by Linux, various BSD variants (arguably including MacOS), and possibly even the likes of OpenSolaris and its derivatives. And, no, this OS is not what e.g. I have running right now.

    On the other hand, the GNU tools/programs and the GNU project are highly relevant and immensely valuable to the world of non-commercial/-proprietary computing.

  5. GNU/Linux systems are certainly conceivable: Take GNU (in the sense of an OS without the kernel) and add Linux as a kernel. Such systems might even be present today. A typical Linux-kerneled* distribution, however, is simply not an example of this.

    *See what I did there!

  6. Some seem to think that “because system A uses GNU components it is GNU” or “[…] it should use GNU in its name”. This line of reasoning does not hold up: It is simply not practical to mention every aspect of a system (be it in IT, Formula One racing, or house building), and GNU does not today play so large a part that it warrants special treatment over all other aspects, including e.g. the X server and associated tools or the desktop. Again, this might have been different in the 1990s, but today is today. Cf. my first post.

    Notably, any even semi-typical Linux-kerneled system of today runs a great variety of software from a number of sources, and limitations in naming like “Linux”, “GNU/Linux”, whatnot, simply make little sense. Let a user name his five or ten most used “regular” applications and his desktop or window manager (depending on what is central to him), and we know something about his system, his needs, and his user experience. For most users, the rest is just an invisible implementation detail. Hell, many only use even the command line as a last resort… (To their own major loss.)

    That GNU possibly was the first major attempt at a free or open-source OS is not relevant either. Consider by analogy Project Gutenberg: Its founder claims* to be the first to think of the concept of eBooks: Should any party dealing with eBooks be forced to include “Gutenberg” in its name, resulting e.g. in the “Gutenberg/Kindle” reader? Or should ordinary book publishers be forced to refer to the original Gutenberg, for using printing presses? No—both notions are absurd. They might deserve to be honored for early accomplishments and, certainly, someone might choose to voluntarily name something in honor (as Project Gutenberg did with the original Gutenberg)—but no obligation can conceivably be present.

    *I very much doubt that this is true. Yes, his idea goes back to, IIRC, the early 1970s or late 1960s, but even back then it cannot have been something entirely unthought of, be it as a vision for the (real) future or as sci-fi. Vannevar Bush published ideas several decades earlier that at least go somewhat in the same direction.

  7. Some arguments appear to go back to a variation of moral superiority, as with Stallman’s arguments (also linked to in my original post) or codeinfig’s below. Notably: Linux is not free (in the sense of free software etc.) enough/does not prioritize freeness enough; ergo, GNU is morally superior and should be given precedence. This too is a complete non sequitur that would lead to absurd consequences, especially because the different parties have different priorities for deliberate reasons.

    Someone who does share GNU’s priorities might, for all I care, choose to voluntarily include GNU as a part of the name of this-or-that. However, no obligations can exist and those who do not share said priorities have absolutely no reason to follow suit. More: It would be hare-brained if they did…

As for the more specific reply*, I start by noting that there are clear signs that you** have not understood (or misrepresent) what I am actually saying, and that it is hard to find a consistent line of reasoning in your text (your language does not help either). If you want another iteration of the discussion, I ask you to pay more attention to these areas.

*I have left some minor parts of the original reply out. There can be some changes in typography and formatting, for technical reasons. I have tried to keep the text otherwise identical, but it is conceivable that I have slipped somewhere during editing or spell-checking—always a danger when quoting extensively. The full original text is present under the link above, barring any later changes by codeinfig.

**I address codeinfig directly from here on.

“whether the emphasis is on GNU alone or GNU and HURD in combination matters little for the overall argument.”

it covers more of the argument than you realise, and it is the flaw in your argument.

you are making gnu out to be a tiny subset of what it is, and making it less significant (details aside, you are greatly diminishing what it is) so that it pales next to “linux.”

this is unfair for several reasons— first, you do not understand what gnu is. you think gnu is just some software that isnt useful to the (everyday) user. its a misrepresentation that would lend at least some weight to your argument, if it werent a misrepresentation.

The “GNU” vs “GNU/HURD” distinction only makes sense if we abuse “GNU” in the manner I have dealt with above. The rest is largely a distortion of what I say. In particular, I have never claimed that GNU would pale to Linux (in the kernel sense)—I claim that it pales in comparison to the overall systems. (Which really should be indisputable.) If you re-read my original post, you will find that I clearly point to uses of GNU tools that are not obvious to the end user; however, the simple truth remains that for someone who does not live on the command line or in Emacs, the overall importance of GNU is not so large that it deserves special treatment over some other parties.

You do not seem to understand how many different components of various types and from different sources go into building e.g. a Debian system (be it as a whole or as the OS), with many of them present or not present depending on the exact setup. We simply do not have anything even remotely close to an almost-just-GNU system with Linux dropped in in lieu of HURD, which seems to be your premise.

gnu was “the whole thing” before linux was a kernel. the web browser is not “linux” either, it is a browser. but we call it “linux.” xwindows predates “linux” by nearly a decade, but we call it “linux.”

when we call these things “gnu” you fail to understand that *that is what it was called already* and “linux” is no more a web browser than gnu was at the time, but somehow its ok for linux to presume itself to be all those things, but its “riding coattails” if gnu helps itself by being included.

Here we have several misrepresentations, e.g. the claim that the browser would be called “Linux”—this simply is not the case. Neither were those things already called “gnu” in the past. Notably, in the time before Linux-kerneled systems broke through, the clear majority of users were running commercial Unixes, e.g. SunOS, with GNU either absent or represented through a few highly specific tools, e.g. Emacs. While it is true that GNU was conceived as “the whole thing” (by the standards at its conception), this does not imply that it actually is “the whole thing” when it is included in a greater context and at a much later date. By analogy, if someone launches his own car company A, and another car company B, thirty years later, uses parts delivered from A, other parts from other companies, and parts that it has produced on its own, should B’s products then be referred to as “A” or as “B”? Obviously: “B”. In addition, due to the absence of HURD there is no point of time prior to Linux where GNU actually, even temporarily, was “the whole thing”, making the claims of precedence the weaker.

Note the item on historical influence above.

Note that my original formulation concerning coat-tails, a) referred explicitly to “better-known-among-the-broad-masses”, which is indisputably true and makes no implication concerning e.g. practical importance, b) was used to demarcate the outer end of the spectrum of interpretations of the situation—I never say that Stallman’s intent is to ride the coat-tails of Linux, only that this is the worst case interpretation.

the whole idea that linux is entitled to do this but gnu is not is special pleading *all over the place.*

its special pleading that strawmans the heck out of what gnu is in the first place— with a generous “side” of ad hom for why stallman thinks we should call it that. oh, its his quasireligious views…

No such special pleading takes place. I clearly say that “Linux” is a misnomer—but that “GNU/Linux” is a worse misnomer through compounding the error.

Watch your own strawmanning!

no, his arguments are not quasireligious. they are philosophical and even practical. thats an ad hom attack, and the only thing religion has to do with it is in parody (and other related ad hom from critics.)

If you actually read what he says, you will find that he is quite often religious/ideological and lacking in pragmatism: He has an idea, this idea is the divine truth, and thou shalt have no other truth. Watch his writing on free this-and-that in particular.

i suspect that at some point (to be fair, you havent yet) you will accuse me of being some kind of stallman devotee. i was an open source advocate first, but i switched to free software after years of comparing the arguments between them. open source is a corporate cult, partly denounced by one of its own founders.

i switched to free software because it lacks the same penchant for rewriting history, for splitting off and then accusing those who didnt follow of “not being a team player,” and basically is more intellectually honest than open source and “linux.”

but its like an open source rite of passage to nitpick about “gnu/linux,” and it tends to follow a formula. you left out the part about how “free” is a confusing word with multiple meanings— sort of like “apple computer.”

I have not yet, and I will not here either—and it would not matter if I did: Your arguments remain the same irrespective of whether you are a devotee or not.

As for your motivations to prefer free over open: They have no relevance to the naming issue.

“To the best of my knowledge, no-one, Stallman included, has suggested that we refer to GNU (!) as GNU/Linux.”

in most instances he does. your definition of gnu is in fact, partial and subset, so he has never suggested we refer to that subset by anything. i dont believe he has ever referenced the subset you call “gnu” at all.

See the general discussion above and why this does not make sense. (But I do not rule out that Stallman too can have said something that does not make sense.)

” `The question is rather whether a Linux (sensu lato) system should be referred to as GNU/Linux.’ “

no, thats a loaded question, and a fine ingredient for a circular argument. “its already called linux.” well, it was already called gnu. but again, *somehow* linux is entitled to do that and gnu isnt, even though gnu was already calling it that.

for stallman and many others who have not been swayed by over a decade of these “dont bother calling it gnu” articles, the question is whether gnu should be referred to as only “linux.”

It was never called GNU, and even if it had been, you could not demand that others, building new products in which GNU is a subset, propagate the name in perpetuity. Cf. above.

the answer to that question is cultural, and already explained— *if you care about software freedom* then gnu is a signifier. to a programmer this makes sense— its self-documenting.

from a marketing perspective, this is ridiculous. to a “linux” fan (to torvalds himself) this is riduculous. to me, its a *lot* more honest. however, what stallman has done is establish a brand that shows something living up to a promise.

“gnu” is quality control (a brand) for user rights. and linux really isnt. it really really isnt, but why it isnt is a separate debate. im not trying to write you an oreilly book here.

so again— if you want to signify user freedom, call it gnu/linux. (if it were up to me, the /linux would be dropped, stallman was trying to be fair.) if you want to signify whatever the heck “linux” stands for, call it whatever you want. i call fig os “fig os,” but fig os is a gnu/linux distro.

This is the flawed moral argument discussed above. In addition, why should e.g. Torvalds include “GNU” to stress free software when free software is not his priority? If he does not include it, how can I (you, Stallman, …) presume to alter the name based on having another priority?

As an aside, if the reasoning went in the other direction, i.e. “You are not free enough to use our name, so stop using it”, this would make a lot more sense. (Assuming that someone sufficiently non-free did use “GNU”.)

“Not at all: A world-view in which GNU is so important that it would warrant top-billing in the context of a Linux system is outdated—not GNU it self.”

thank you for clarifying, but you have not explained why gnu is not important enough to warrant top-billing, except to say that applications are more important to users who dont know why gnu is important.

Again: There are many components from different sources that make up a system. GNU is just one of them. (If you feel that GNU truly outweighs all others to such a degree that it warrants top-billing, it is up to you to prove this.) Further: What is important to the user is what matters in the end. If, by analogy, Bash or Emacs behaves the same, but is now implemented in Java or C#, it remains Bash resp. Emacs. The developers might be in understandable tears, but the world goes on. Implementations are fungible to the users—the result of the implementation is not. Hell, when I use Vim, Bash, and Tmux* under Cygwin, I have almost the same experience as when working under Debian, even when actually on a Windows machine… Even speaking of “Windows user” and “Linux [or whatnot] user” makes a lot less sense today than it did in the past, and it is often more sensible to speak of e.g. “Bash user” and “Vim user”.

*Note that of the three, only one is a GNU program.

your article is mostly assertions, and you do start to explain some of them though i still think it rests mostly on ad hom and assertion. its a very common set of assertions too— made year after year after year, i even made them myself once long ago.

Ad hominem and assertions pretty much match what I see in your writing…

“I am saying that building e.g. a Debian system without GNU is conceivable.”

thats hardly fair. gnu has been vital to all this for a quarter of a century (bsd can make a similar argument, considering that the only reason gnu was necessary was they were tied up in gaining the rights to their own work.)

I doubt that GNU has been that important for a whole quarter of a century, but even if it has been, that is irrelevant: It is not that important now. This is not a matter of fairness. Light-bulbs were great; today, they have been replaced by LEDs and other newer technologies. (Be it for technical reasons or through legislation.) I do not look up at my ceiling lamp and say a quick prayer to Edison (or one of the other inventors involved in the development of light-bulbs).

you make gnu less essential by creating a strawman version of what it is, so that you can say “but this isnt a good enough reason to warrant top-billing.”

we cant agree on the validity of your argument if you insist on misrepresenting what you weigh the importance of.

in fact, your argument suggests to me that the name is more important than the thing itself. i mean— gnu wouldnt be essential if this mostly-hypothetical thing like gnu were created instead! but thats no reason to call my entire operating system:

gnu! linux!

stallman has said lots of times that the name really isnt important. seems to really go against this whole thing, eh? people miss his asterisk where he says its important that everything gnu exists to accomplish is not forgotten for a side movement that reframes years of work to deliver freedom to the user as “just a practical way to develop software.”

Most of the above is more of you misrepresenting (or misunderstanding) what I am actually saying. As for the name vs. the thing: The topic of my post is the name and flawed reasoning around the name; ergo, I deal with the name and the flawed reasoning.

open source creates the need for this, stallman says “well if youre going to misrepresent everything we do, at least give us three letters of credit for this enormous amount of software youre relying on.” and people say more or less: “wow, the nerve of THIS GUY!”

its so funny because all he wants is for people to not forget that the entire point of all this was free software.

His entire point was free software. Torvalds’ (and many others’) is not. And again: If GNU were given credit in the name, then there are other parties with a similar right.

your argument is whether it should be called gnu/linux or not. but it never addresses the years-old argument of why it should be called gnu— it makes up its own reason, and then steps on it.

it really is a giant strawman. and i appreciate that you are almost certainly sincere and wouldnt create a strawman just to be a jerk. but its still a strawman.

Your claims make no sense, unless you truly are under the misapprehension that a typical system consists of the kernel, various GNU components, and a few other, trivial bits. This is very, very far from the truth.

“GNU GPL, however, is irrelevant for the functioning, it would be easy to replicate something similar,”

haha— it isnt irrelevant at all, ask torvalds if it is. it isnt easy to replicate something similar either, and its less easy to get people to use such a thing. pulling off copyleft (when a billion dollar corporation was heavily dedicated to defeating it) was a serious coup. youre making it out to be a bunch of words in a file.

go make a gnu gpl— go ahead. show me. have anybody you can find to help in on it, too. have someone show me how easy it is.

“there are other available licenses. Notably, such other licenses, e.g. something in the Apache or BSD families, are often preferred by people outside GNU, because these parties have other priorities than free software.”

and thats the thing. they dont do what the gpl does, they dont achieve what the gpl does, but you consider them replacements. its apples and oranges.

Firstly, you assume (again) that copy-left, free this-and-that, whatnot is the priority of everyone. It might be your and Stallman’s priority, but it simply is not a global priority—and it is not needed to build e.g. an open-source computer system. Linux, Debian, … do not need the GPL to exist.

Now, if someone does want a copy-left license? Firstly, there are other copy-left licenses around, if possibly with a somewhat different coverage. Secondly, combining another existing license with aspects of the GPL and/or with a few days of research by a lawyer should yield something quite passable. (True, there might be a few issues to sort out over time, e.g. due to ambiguities or complications with different jurisdictions, but not something that would require several decades to build.)

“GCC is mostly important for building the system, not for using it.”

special pleading, all over the place.

Not in the least: How would you justify counting the compiler used to build the system as a component of the system? (Except for those proportionally rare cases when it is actually used to compile other programs when later using the system.) See also below.

“In fact, even now, many build setups contain explicit checks for the presence of GCC and automatically fallback to CC, should GCC be absent.”

and this is a bit of trivia, because all this stuff we have now would not exist (and would not be maintained) if everyone had to use cc. its like you know the significance of gcc but choose to ignore it when its convenient to your argument.

open source would not exist without gcc. linux might, but not the linux we have today. some little usenet gem that wasnt developed by half as many people— because they needed gcc to do it.

That is a very far-reaching claim. Can you back it up? I doubt it. In the case of open source in general, it is definitely incorrect, as can be seen by the many projects that use e.g. Java instead of C… Also keep in mind that in the absence of GCC, someone is likely to have started to improve CC or to build a more suitable compiler as need arose. Consider e.g. how GIT came into being; note that GCC as it is today is a very different beast from what it was when Torvalds started his work; note that GCC contains much that is not needed for Linux in the first place (e.g. unused languages) or is not essential for the existence of Linux (e.g. compilation for architectures outside of the main-stream PC processors).

why even mention cc when you could have talked about clang instead? because i already addressed that when i talked about bsd, and because the part about cc is hypothetical (and at best, unlikely.)

One of several claims that make poor sense even on the sentence level. Besides: When did you address CC? Why would what I say about CC be unlikely?

As for Clang, I was not aware of it until now (but did consider mentioning LLVM in addition to CC, or the possibility of having started the original development with even a non-Unix compiler). However, its existence proves my point: Even if CC would not be a realistic replacement for GCC today, another tool definitely is. And: Other tools capable of filling the role of GCC would have been possible at any point.

“glibc is possibly the most deeply ingrained dependency (and a better example than my original GRUB); however, this is still just one library.”

and linux is just one kernel. so what? its a monolithic kernel, and glibc is a monolithic library. theyre both enormous. you cant make them smaller by counting units, thats absurd.

glibc is far smaller than the kernel, and the kernel, by its very character, is the core of an OS. (glibc is not even the largest individual library—quite far from it, actually.) What you mean by “counting units” is not clear to me. The relevance of whether they are monolithic or not is lost on me.

“Here too we have the situation that glibc is not used because it is the only alternative, just the best.”

so once again, we shouldnt call it “gnu/linux” because gnu is just a bunch of vital components that arent vital because you could easily replace them with a bunch of drastically inferior alternatives that no one actually wants.

hmpf. yes, im taking some liberties with my version of your argument, but only to try to get its author to appreciate how much of a stretch it is.

“As with GCC, its absense would simply have led to something else being used.”

so *hypothetically*, gnu doesnt deserve top billing. because it could be less important than it is, if it werent.

You miss my point: That if we look at the situation as it is and say “part X is important today; ergo, if part X had never existed, the whole would not exist”, we ignore both the possibility of a replacement that would still have made the whole viable and the considerable likelihood that something else would have evolved over time to fill the same role or that the role would have been covered in a different manner. If we look at the situation today and see that just removing e.g. glibc would cause a given system to fail catastrophically, we cannot conclude that the system would not have existed had glibc not been present in (hypothetically) 1990—and therefore we cannot conclude that the existence of the system is contingent on glibc and, by implication, GNU.

As a consequence, when you say “i said without gnu. no gnu gpl, no gcc, no glibc. you go right ahead, since gnu is irrelevant now. remove it, and find out what you get”, the answer is “without GPL, GCC, and glibc, we would see something that is recognizably approximately what we have today”. We might have ended up with a king penguin instead of an emperor penguin, but we are still talking penguins. Now, a scenario that removes GNU entirely from the early Linux development, that could have been a very major problem—but that does not imply that Linux and/or Linux-kerneled systems cannot exist without GNU today or that e.g. glibc is so central that they would never have come into existence without it.

“Even now, keeping the interface intact and replacing the implementation with a non-GNU variation would be technically feasible.”

but then, why should we rewrite glibc just to deny gnu the billing it allegedly doesnt deserve now?

That is not what I suggest: The point is that if glibc was no longer an option, hypothetically because a GPL violation necessitates its removal, a work-around is available. Yes, this might be tantamount to a team of surgeons operating around the clock to put in an artificial heart that buys the patient time until a real heart transplant is possible; no, it does not equal a dead patient.
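(The abstract point, that a consumer depends on an interface and not on the implementation behind it, can be sketched. A deliberately trivial Python analogy, not a claim about the actual effort of replacing glibc; all names below are invented for illustration:)

```python
# Trivial analogy: code written against an interface keeps working when the
# implementation behind it is swapped. (The real-world analogue would be the
# C standard-library interface with different libraries implementing it.)

class GnuLibC:
    def strlen(self, s: str) -> int:
        return len(s)

class OtherLibC:
    """A hypothetical drop-in replacement: same interface, different internals."""
    def strlen(self, s: str) -> int:
        n = 0
        for _ in s:        # deliberately different implementation
            n += 1
        return n

def application(libc) -> int:
    # The "application" relies only on the interface (strlen); it neither
    # knows nor cares which implementation sits behind it.
    return libc.strlen("penguin")

assert application(GnuLibC()) == application(OtherLibC()) == 7
```

As long as the interface is kept intact, the consumer cannot tell the implementations apart; the cost lies in writing the replacement, not in adapting the consumers.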

“arguments that speak against referring to a system by the name of its kernel also speak against using the names of individual libraries, build-tools, and whatnots.”

except that you are oversimplifying the “linux is just a kernel” argument, failing to understand what people actually want with the name “gnu/linux,” not aware of why they want it, and making the name out to be more important than what the name refers to.

Not at all: The only way I can see to make your statement make sense is to posit that “GNU/Linux” would actually be enough to cover the entire OS (at a minimum) or the OS + a considerable portion of the rest of the system. This, however, is not even close to being the case. It is conceivable that a working “GNU/Linux” (only) system is buildable today, but it would not be the equivalent of e.g. Debian, Fedora, Suse, …

As to name vs. thing, cf. above.

“I grant that Linux would conceivably not exist today without the presence of GNU in the past”

nor the present.

Prove your assertion. Remove GNU today, where do you see the insurmountable obstacle? (As opposed to the far more likely transitional period of blood, sweat, and tears.)

“if we speak of the system as a whole (the sensu lato), I refer to my post for a discussion why GNU is no longer important enough to define the system.”

now it is a discussion. it was an assertion, which leaned a bit on misunderstanding and special pleading and ad hom.

I strongly disagree.

“I do. Cf. above and your apparent confusion of what GNU is.”

i am not “confused” about what gnu is. gnu was from the beginning, a fancy-pants latin phrase (“the whole thing.”)

since the 1990s, a bunch of people have suggested that it is just a bunch of applications that everyday people dont really use.

your argument is built around the suggestion being a fact.

See the general discussion for various meanings and your incorrect interpretation.

given that the conclusion of your argument is that we should agree with them, i would call your entire argument circular.

Your claim makes no sense, shows that you have not understood me, and raises doubts as to whether you understand what a circular argument is.

we dont have to rewrite history. however, i would say you argue (quite unintentionally, beacuse i think you really do misunderstand the nature and premise of your own argument) that history doesnt deserve to not be rewritten.

it is not necessary to rewrite history to refer to gnu and “linux” instead of gnu/linux.

Again, you make no sense.

it is necessary to rewrite glibc, the gpl, and reestablish so much of gnu that you generously refer to as “linux,” in order to make most of the PREMISES of your argument into facts.

If you believe that, you definitely have not understood what I am actually saying.

if the premises are false, the argument isnt sound. in your reply, you spend a lot of time defending the logic of your argument based on a more hypothetical premise.

the premise of your argument was just false. the logic is heavily just assertion.

You have not shown that my premise is false; yet seem to rely on faulty premises or faulty understandings, yourself.

there isnt any need for ad hom, its simply wrong. but thats not important.

Where have I used ad hominem? Do you understand what this actually implies?

what matters is that in ten years, people will still be trying to get “gnu” removed from “gnu/linux.” and we can have this debate all the way there. i do hope we get breaks for the restroom though.

I do not see that as something that really matters, and the opinion that “GNU” does not belong in the name is likely to grow stronger for reasons that include a further lessening of GNU’s practical relevance, a smaller proportion of people who know of GNU at all, and a growing importance of both the distribution aspect and the desktop aspect.

“I do consider free software highly beneficial, but free software is not a core priority of Linux”

and that is exactly why stallman says it shouldnt be called just linux. because free software is not a core priority of it.

thats his entire argument. if you care about free software, call it gnu.

it has nothing to do with percentages of code, it has nothing to do with riding coattails.

it has everything to do with why gnu was created in the first place. not to write glibc, not to give you a web browser.

gnu was created to give the user freedom. and if you care about that, calling it “linux” ignores the original purpose, paints something relevant in modern times into enough obscurity that people think its just about user applications— and lets “linux” come along and assert boldly that freedom doesnt matter.

its not about ego, religion, or percentage of code. its about whether you care about freedom or not.

See the general discussion for why this is a faulty argument when it comes to the name.

funny thing, its always implied that stallman is just nitpicking, but year after year (after year after year) open source nitpicks that “gnu” isnt important enough to be in the name.

I, personally, have implied no such thing. That “GNU” does not belong is not nit-picking.

free software never convinces everyone to add “gnu” and open source never convinces everyone to drop it, but both sides continue to nitpick this for decades.

The division into free and open software should not play a role when discussing the name issue. If it does, something is fundamentally wrong with the approach.

your argument is most likely honest, if lacking context and history. the argument itself has its own history, though open source doesnt learn from the failure of the argument youre making, it just keeps reasserting it.

the history of your argument is that it is constantly made— 20+ years running now.

i made it myself, over a decade ago— i abandoned it because it was silly.

From what I have seen so far, the lack of historical and contextual understanding seems to be more on your side, with the one reservation that I was actually not aware of the extensive history of the argument. The reasoning you apply today, e.g. that what I refer to as moral superiority above should affect the name, is the silly part.

Written by michaeleriksson

April 25, 2018 at 6:25 am

“Linux” vs “GNU/Linux”

with 5 comments

A claim sometimes made—especially by Richard Stallman, who is the founder and main force behind GNU—is that “Linux” is an inappropriate term (when not referring specifically to the kernel) and that “GNU/Linux” would be better…

However, this view is at best outdated*—at worst, it is an attempt to ride on the coat-tails of a better-known-among-the-broad-masses project. Most likely, however, it is a sign that Stallman is too fixated on his own vision of “GNU/HURD”**, and is unable to see that there are other perspectives on the world: Since his focus is on GNU, those who use Linux instead of HURD obviously appear to use “GNU/Linux” instead of “GNU/HURD”. This, however, has very little relevance for the typical Linux user:

*GNU used to be a much bigger deal than it is today, for reasons both of changing user demographics/behaviors/wants and of an increased set of alternative implementations and tools. Certainly, Linux (in any sense) would have had a much tougher time getting off the ground without GNU.

**HURD was conceived as the kernel-complement to GNU roughly three decades ago—and has yet to become a serious alternative to e.g. Linux.

The general criticism that Linux is just the kernel and that the user experience is dominated by user programs (and other non-kernel software, e.g. a desktop) is quite correct. (This can be seen wonderfully by comparing an ordinary Linux computer and an Android smart-phone: They have very little in common in terms of user experience, but both use a Linux kernel. Conversely, Debian has made releases that use a non-Linux kernel.) However, in today’s world, most Linux users simply do not use many GNU programs, they have correspondingly little effect on the user experience, and a functioning Linux system entirely without them* is conceivable.

*The main problem being “hidden” dependencies. For instance, most Linux computers use GRUB for booting and GRUB is a GNU tool. However, none of these hidden dependencies are beyond replacement.
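How much a given system actually relies on GNU implementations can be checked empirically. A minimal sketch, assuming the tools honor the conventional `--version` flag (GNU tools do; BusyBox and BSD variants often do not, which is itself a useful signal); the exact tool list is an arbitrary illustrative selection:

```shell
# Check which of a few common commands are GNU implementations,
# by looking for "GNU" in the first line of their --version output.
gnu_count=0
for tool in ls grep sed awk tar; do
  if "$tool" --version 2>/dev/null | head -n 1 | grep -q GNU; then
    echo "$tool: GNU"
    gnu_count=$((gnu_count + 1))
  else
    echo "$tool: non-GNU (or no --version support)"
  fi
done
echo "GNU tools found: $gnu_count/5"
```

On a typical desktop distribution all five will report GNU; on e.g. an Alpine or BusyBox-based system, few or none will.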

For instance, a typical Linux user might use Firefox or Chrome (both non-GNU), LibreOffice (non-GNU), a few media applications (typically non-GNU), … Even most parts of the OS in an extended sense will typically not be GNU programs, e.g. the X-server, the window manager, the log-in manager, the network manager, a desktop environment, … The best way to approximate the user experience would likely be to speak of “distribution/desktop”, e.g. “Debian/KDE”*, especially seeing that most desktop environments insist on providing their own, entirely redundant tools for tasks that more generic tools already do a lot better, including text editors, music players, image viewers, …

*KDE is a user-hostile disaster that I strongly recommend against, but it is likely still the best-known desktop environment. Generally, not everyone uses a desktop environment, but most do.

Even those, like yours truly, who actually do use a lot of GNU programs are not necessarily bound to GNU: Most important GNU tools are re-implementations of older tools and there are alternate implementations available even in the open- and free-source worlds. Are the GNU variations of e.g. “ls”, “mv”, “awk”, better than the others? Possibly. Would it kill someone to switch? No. Even a switch from Bash to Ksh or Zsh would not be close to the end of the world. Admittedly, there might be some tools that are so significantly better in the GNU-version that users would be very troubled to switch (gcc?) or that are not drop-in replacements (e.g. gnumeric). These, however, typically are either developer tools or have a small user base for other reasons. Most modern users will not actively use a compiler—or will not need the extras of gcc for their trivial experiments. Most users will opt for a component of an office suite (e.g. LibreOffice) over gnumeric. Etc.
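That switching shells is survivable can be illustrated directly: a strictly POSIX script behaves identically under different shell implementations. A minimal sketch (assuming that some subset of sh, bash, and dash is installed; any missing implementation is simply skipped, and ksh or zsh could be added to the list in the same way):

```shell
# A strictly POSIX snippet: sum the integers 1..5.
script='x=0; for i in 1 2 3 4 5; do x=$((x + i)); done; echo $x'

# Run it under each available shell implementation; every available
# shell prints 15, regardless of which implementation it is.
for sh_impl in sh bash dash; do
  if command -v "$sh_impl" >/dev/null 2>&1; then
    echo "$sh_impl: $("$sh_impl" -c "$script")"
  fi
done
```

Scripts that stick to shell-specific extensions (Bashisms, Zsh globbing, …) would of course need porting, which is the real, but surmountable, cost of such a switch.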

For that matter, even on the command line, my two most extensively used programs (vim, mplayer) are not from GNU either…

Yes, using “Linux” is misleading (but generally understood correctly from context); no, using “GNU/Linux” is not an improvement. On the contrary, “GNU/Linux” is more misleading, shows a great deal of ignorance, and should be avoided in almost all cases*.

*An obvious exception would be a situation where GNU is the core topic and a contrast between GNU-with-the-one-kernel and GNU-with-the-other-kernel is needed.

GNU still plays a very valuable role through providing free-software alternatives for many purposes. This role, however, is not of a type that justifies “GNU/Linux”.

As an aside, Stallman’s own arguments focus unduly on the free-software aspect: Most of his text seems to argue that GNU is valuable through being keener on free software than Linux—something which is entirely irrelevant to the question of naming. (In general, Stallman appears to see free software as a quasi-religious concern, trumping everything else in any context.)

Written by michaeleriksson

April 14, 2018 at 4:33 am

Meltdown and Spectre are not the problem

with one comment

Currently, the news reporting in the IT area is dominated by Meltdown and Spectre—two security vulnerabilities that afflict many modern CPUs and pose a very severe threat to at least* data secrecy. The size of the potential impact is demonstrated by the fact that even regular news services are paying close attention.

*From what I have read so far, the direct danger in other regards seems to be small; however, there are indirect dangers, e.g. that the read data includes a clear-text password, which in turn could allow full access to some account or service. Further, my readings on the technical details have not been in-depth and there could be direct dangers that I am still unaware of.

However, they are not themselves the largest problem, being symptoms of the disease(s) rather than the disease itself. That something like this eventually happened with our CPUs is actually not very surprising (although I would have suspected Intel’s “management engine”, or a similar technology, to be the culprit).

The real problems are the following:

  1. The ever growing complexity of both software and hardware systems: The more complex a system, the harder it is to understand, the more likely to contain errors (including security vulnerabilities), the more likely to display unexpected behaviors, … In addition, fixing problems, once found, is harder, more time-consuming, and likelier to introduce new errors. (As well as a number of problems not necessarily related to computer security, notably the greater effort needed to add new features and make general improvements.)

    In many ways, complexity is the bane of software development (my own field), and when it comes to complicated hardware products, notably CPUs, the situation might actually be worse.

    An old adage in software development is that “any non-trivial program contains at least one bug”. In the modern world, we have to add “any even semi-complex program contains at least one security vulnerability”—and modern programs (and pieces of hardware) are more likely to be hyper-complex than semi-complex…

  2. Security is something rarely prioritized to the degree that it should be, often not even understood. When in doubt, “Our program is more secure!” is (still) a weaker sales argument than “Look how many features we have!”, giving software manufacturers strong incentives to throw on more features (and introduce new vulnerabilities) rather than to fix old vulnerabilities or to ensure that old bugs are removed.

    Of course, more features usually also lead to greater complexity…

  3. Generally, although not necessarily in this specific case: A virtual obsession with having everything interface with everything else, especially over the Internet (but also e.g. over mechanisms like the Linux D-Bus). Such generic and wide-spread interfacing brings more security problems than benefits, for reasons that include a larger interface (implying more possible points of vulnerability), a greater risk of accidentally sharing private information*, and the opening of doors for external enemies to interact with the software and to deliberately** send data after a successful attack.

    *Be it through technical errors or through the users and software makers having different preferences. For an example of the latter, consider someone trying to document human-rights violations by a dictatorship, and who goes to great length to keep the existence of a particular file secret, including keeping the file on an encrypted USB drive and cleaning up any additional files (e.g. an automatic backup) created during editing. Now say that he opens the file on his computer—and that the corresponding program immediately adds the name and path of the document to an account-wide list of “recently used documents”… (Linux users, even those not using an idiocy like Gnome or KDE, might want to check the file ~/.local/share/recently-used.xbel, should they think that they are immune—and other files of a similar nature are likely present for more polluted systems.)

    **With the particularly perfidious variation of a hostile maker of the original software, who abuses an Internet connection to “phone home” with the user’s private information (cf. Windows 10), or a smart-phone interface to send spam messages to all addresses in the user’s address book, or similar.
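The “recently used” leak mentioned in the footnote above is easy to verify for oneself. A minimal sketch, assuming the common freedesktop default location (other or additional locations are possible on some systems, and the entry count is only a rough, grep-based approximation of the XML content):

```shell
# Inspect the desktop-wide "recently used documents" list, which many
# applications write to without asking. XDG_DATA_HOME overrides the
# default location, so honor it if set.
recent="${XDG_DATA_HOME:-$HOME/.local/share}/recently-used.xbel"
if [ -f "$recent" ]; then
  # Each <bookmark ...> element records one opened file, including its path.
  echo "Found $recent with $(grep -c '<bookmark ' "$recent") recorded entries"
else
  echo "No recently-used.xbel found at $recent"
fi
```

Deleting the file helps only temporarily; applications will typically recreate and repopulate it on the next use.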

To this, government intervention, restrictions, espionage, whatnot, might be added, already or in the future.

The implications are chilling. Consider e.g. the “Internet of things”, “smart homes”, and similar low-benefit* and high-risk ideas: Make your light-bulbs, refrigerators, toasters, whatnot, AIs and connect them to the Internet and what will happen? Well, sooner or later one or more of them will be taken over by a hostile entity, be it a hacker or the government, and good-bye privacy (and possibly e.g. money). Or consider trusting a business with a great reputation with your personal data, under the solemn promise that they will never be abused: Well, the business might be truthful, but will it be sufficiently secure for sufficiently long? Will third-parties that legitimately** share the data also be sufficiently secure? Do not bet your life on it—and if you “trust” a dozen different businesses, it is just a matter of time before at least some of the data is leaked. Those of you who follow security-related news will have noted a number of major revelations of stolen data being made public on the Internet during the last few years, including several incidents involving Yahoo and millions of Yahoo users.

*While there are specific cases where non-trivial benefits are available, they are in the minority—and even they often come with a disproportional threat to security or privacy. For instance, to look at two commonly cited benefits from this area: Being able to turn the heating in one’s apartment up from the office shortly before leaving work, or down from a vacation resort, is a nice feature. Is it more than a nice-to-have, however? For most people, the answer is “no”. Do I actually want my refrigerator to place an order with the store for more milk when it believes that I am running out? Hell no! For one thing, I might not want more milk, e.g. being about to leave for a vacation; for another, I would like to control the circumstances sufficiently well myself, e.g. to avoid receiving one delivery for (just) milk today, another for (just) bread tomorrow, etc. For that matter, I am far from certain that I would like to have food deliveries be a common occurrence in the first place (for reasons like avoiding profile building and potential additional costs).

**From an ethical point of view, it can be disputed whether this is ever the case; however, it will almost certainly happen anyway, in a manner that the business considers legitimate, the simple truth being that it is very common for large parts of operations to be handled by third-parties. For example, at least in Germany, a private-practice physician almost certainly will have lab work done by an external contractor (who will end up with name, address, and lab results of the patient) and have bills handled by a factoring company (who will end up with name, address, and a fair bit of detail about what took place between patient and physician)—this despite such data being highly confidential. Yes, the patient can refuse the sharing of his data—but then the physician will just refuse taking him on as a patient… To boot, similar information will typically end up with the patient’s insurance company too—or it will refuse to reimburse his costs…

On paper, I might look like a hardware maker’s dream customer: In the IT business, a major nerd, living behind the keyboard, and earning well. In reality, I am more likely to be a non-customer, to a large part* due to my awareness of the many security issues. For instance, my main use of my smart-phone is as an alarm clock—and I would not dream of installing the one-thousand-and-one apps that various businesses, including banks and public-transport companies, try to shove down the throat of their customers in lieu of a good web-site or reasonable customer support. Indeed, when we compare what can be done with a web-site and with a smart-phone app (in the area of customer service), the app brings precious little benefit, often even a net detriment, for the customer. The business of which he is a customer, on the other hand, has quite a lot to gain, including better possibilities to control the “user experience”, to track the user, to spy on other data present on the device, … (All to the disadvantage of the user.)

*Other parts include that much of the “innovation” put on the market is more-or-less pointless, and that what does bring value will be selling for a fraction of the current price to those with the patience to wait a few years.

Sadly, even with wake-up calls like Meltdown and Spectre, things are likely to grow worse and our opportunity to duck security risks to grow smaller. Twenty years from now, it might not even be possible to buy a refrigerator without an Internet connection…

In the meantime, however, I advise:

  1. My fellow consumers to beware of the dangers and to prefer more low-tech solutions and less data sharing whenever reasonably possible.
  2. My fellow developers to understand the dangers of complexity and try to avoid it and/or reduce its damaging effects, e.g. through preferring smaller pieces of software/interfaces/whatnot, using a higher degree of modularization, sharing less data between components, …
  3. Businesses to take security and privacy seriously and not to unnecessarily endanger the data or the systems of their customers.
  4. The governments around the world to consider regulations* and penalties to counter the current negative trends and to ensure that security breaches hurt the people who created the vulnerabilities as hard as they hurt their customers—and, above all, to lay off idiocies like the Bundestrojaner!

    *I am not a friend of regulation, seeing that it usually does more harm than good. When the stakes are this high, and the ability or willingness to produce secure products so low, then regulation is the smaller of the two evils. (With some reservations for how well or poorly thought-through the regulations are.)

Written by michaeleriksson

January 7, 2018 at 1:08 am