Michael Eriksson's Blog

A Swede in Germany


Problems with adduser


Doing some light administration, I stumbled upon a few idiocies in the Linux tool “adduser”* and the associated config-file /etc/adduser.conf on my Debian** system.

*Used, unsurprisingly, to add new users to the system.

**Who is to blame for what is not clear. As I have grown increasingly aware over the last few years, the Debian developers make quite a few poor decisions and changes to various tools and defaults that do not correspond to the wishes of the original tool-makers or that violate common sense and/or established “best practices”.

  1. This tool is the source of the “skeleton” files* added during user creation (well, I know that already) and contains a config variable for the corresponding location (SKEL)—but provides no obvious way of turning this idiocy off entirely. (Although a secondary config variable for blacklisting some files, SKEL_IGNORE_REGEX, can probably be (ab-)used for this purpose; the better way is likely to just keep the directory empty.)

    *Presumably so called because they provide a metaphorical skeleton for the user’s home directory. Note that there are other mechanisms that create unwanted contents in home directories. (One example is discussed in an earlier post.)

    Why is this an idiocy? Well, while there might be some acceptable use case for this, the typical use is to fill the respective user’s home directory with certain default configurations. This, however, simply does not make sense: Configuration settings should either be common, in which case they belong in global files, not in individual per-user copies; or they should be individual, in which case the individual user should make the corresponding settings manually. In neither case does it make sense to copy files automatically.

    Indeed, in the former case, users and administrators are put out, because (a) the skeleton files must be kept synchronized with the global configuration files (to boot, in an unexpected and counter-intuitive manner), and (b) the users get only a snapshot of the configuration—the configuration as it was at the time of user creation, without any changes made later.

    In the latter case, a user who wants his own config file can simply copy the global configuration file manually—should he want it at all.*

    *Experienced users tend not to want anyone else’s preferences in their config files, and have often made such extensive changes, or are so set on values that they have used for years, that they will likely not want the global files to begin with.

    Now, one comparatively rare case where skeleton files could make sense is when setting up several sets of users with different characteristics (e.g. administrators, Java developers, C developers)—but that will not work with this mechanism, because the skeleton files are common to all users. In order to get this to work, one would have to provide an entirely new config file or play around with command-line settings—and then it is easier to just create the users without skeleton files and copy the right files to the right users in a secondary step.

    As an aside, I do not recommend the sometime strategy of having a user-specific config file call a global config file to include settings from there (as might have been a workaround in the case of skeleton files): This tends to lead to confusion and unexpected effects, especially when the same user-specific config file is used on several systems (e.g. a work and a home computer), when the global config changes, or when something is done in an inconsistent order. Instead, I recommend that the individual user use either only the global file or only his own.

  2. When the home directory is created, the typical access-control defaults (“umask”) are ignored* in favor of the config variable DIR_MODE—and this variable has the idiotic and inexcusable value 0755**. In other words, home directories are created in such a manner that everyone can read*** the directory per default. It is true that this will not give the rights to read the contents of files****, but being able to see file names can be bad enough in itself: Consider e.g. if the wrong person sees names like “resignation_letter”, “proposal_plan”, “porn”, “how to make a bomb”, …

    *Such duplication of responsibility makes it harder to keep security tight, especially since the admins simply cannot know about all such loopholes and complications—not least because they change over time or vary from one Unix-like system to another.

    **The best default value is 0700, i.e. “only the owner can read”; in some cases, 0750, i.e. “owner and group members can read”, might be an acceptable alternative.

    ***To be more specific, list the directory contents and navigate the directory. (Or something very close to this: The semantics of these values with regard to directories are a bit confusing.)

    ****Files (and sub-directories) have their own access rights that do respect the value of the umask (at their respective creation).

  3. The format of a user name is restricted by the configuration variable NAME_REGEX. Unfortunately, this variable appears only to be able to add restrictions, not to relax the built-in ones. Quoting the documentation:

    adduser and addgroup enforce conformity to IEEE Std 1003.1-2001, which allows only the following characters to appear in group and user names: letters, digits, underscores, periods, at signs (@) and dashes. The name may not start with a dash. The “$” sign is allowed at the end of usernames (to conform to samba).

    An additional check can be adjusted via the configuration parameter NAME_REGEX to enforce a local policy.

    This is unacceptable: Unix-like systems typically accept almost any character in a username, and what name schemes or restrictions are applied should be a local decision—not that of some toolmaker.

    For reasons of interoperability through a “lowest common denominator”, it makes great sense to apply some set of restrictions per default; however, these restrictions must be overridable and should have been integrated in NAME_REGEX (or a second, parallel variable). (A sketch of a correspondingly hardened configuration follows after this list.)

    As an aside, I am quite surprised that “@” is allowed per default, seeing that this character is often used to connect a user with e.g. a domain or server name (as with email addresses). When the user name itself can contain an “@”, it becomes impossible to tell for certain whether “X@Y.Z” is a user name (“X@Y.Z”) or a user name (“X”) combined with a server or domain (“Y.Z”). In the spirit of the aforementioned “lowest common denominator”, I would not only have expected this to be forbidden—but to be one of the first and most obvious things to be forbidden. (I would speculate that there is some legacy issue that requires that “@” remain allowed.)
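To make the above concrete, here is a minimal sketch of what a correspondingly hardened /etc/adduser.conf might contain on a Debian system. The variable names are those documented in adduser.conf(5); the values are my suggestions, not official defaults, so verify them against the documentation of your own version:

    # /etc/adduser.conf (excerpt)

    # Home directories readable and enterable by the owner only. (On a
    # directory, "r" allows listing the names, "x" allows entering it.)
    DIR_MODE=0700

    # Keep the skeleton directory empty, or blacklist everything in it:
    SKEL=/etc/skel
    SKEL_IGNORE_REGEX=".*"

    # Tighten the user-name restrictions further (relaxing them is not possible):
    NAME_REGEX="^[a-z][a-z0-9_-]*$"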


Written by michaeleriksson

July 3, 2018 at 4:44 am

XDG, lack of respect for users, and bad design


Normally and historically, using Linux has been much more pleasant than using MS-Windows, at least for those who stay away from KDE, Gnome, et co. Unfortunately, there has long been a negative trend towards worse usability, greater disregard for the user’s right to control his own computer, etc. (See e.g. [1] or [2] for similar discussions.)

Today, after doing some experiments with various X* and WM setups, I found something utterly inexcusable, a truly Microsoftian disregard for the user’s interests:

*By which I refer to the window system named “X”—unlike many other instances where I have used it as a placeholder.

Somehow, a tool, xdg-user-dir, had been activated for the first time during my experiments (whether this tool was installed per default or had snuck its way in during my experiments, I do not know)—and it promptly created a slew of directories in one of my user accounts. There was no inquiry whether I wanted them created; they were just silently created. To boot, they were exactly the type of directories that I do not want and have always deliberately deleted when they were present on installation: Random directories like “Desktop”, “Pictures”, “Music” bring me no value* whatsoever and do not in any way, shape, or form match my computer use—starting with the fact that I use different user accounts for different tasks and order my files and whatnot according to entirely different criteria**. They clutter my file system—nothing more, nothing less.

*Note that, unlike with MS Windows, the OS does not throw a fit when these directories are absent. (Although it is conceivable that one of the worse desktop environments would—but then I do not use those!)

**The exact criteria vary, but the most common is “by topic”, rather than “by type”. For instance, a video and an eBook on the same topic would land in the same directory; two eBooks on different topics would land in different directories.

Having a look at the documentation, the functioning of this tool appears to be fundamentally flawed, even apart from its presumptuousness: In my reading, the directories would simply have been created again, and again, and again, had I just deleted them. This too is inexcusable—a manually deleted directory should stay deleted unless there are very strong reasons speaking to the contrary*/**. Now, I have the computer knowledge and sufficient drive that I actually could research and solve this issue, by finding out what idiotic program had added the directories and making sure that a repetition did not take place (and doing so before my other user accounts were similarly molested)—but this is still a chunk of my time lost for absolutely no good reason, just because some idiot somewhere decided that he knew better than I did what directories I should want. (A sketch of the fix follows after the footnotes.) For many others, this would not have been an option—they would have deleted the directories, seen them recreated a while later, deleted them again, …, until they either gave up with a polluted user account or screamed in frustration.

*Such reasons would typically involve a directory or file that is needed for practical operations while not being intrusive to the user (almost always implying a “dot” or “hidden” file). Even here, however, querying the user for permission will often be the right thing to do. To boot, it is rarely the case that an application actually needs to create anything in the user account, when not explicitly told to do so (e.g. through a “save” action by the user). Off the top of my head, I can only think of history files. For instance, creating a config file is only needed when the user actually changes something from the default config (and if he wants his config persistent, he should expect a file creation); before that it might be a convenience for the application, but nothing more. Temporary files should be created in the temp-directory, which is in a central place (e.g. /tmp) per default (and should the user have changed it, there is an obvious implicit consent to creation). Caching is a nice-to-have and would also often take place in a central location. Indeed, caching is an area where I would call for great caution and user consent both because of the potential privacy issues involved and because caching can take up quite a bit of storage space that the user might not be aware of. Finally, should the user wish to save something, it is up to him where to save it—he must not be restricted to an application specific directory. All-in-all, most cases where an application “needs” a directory or file will be pseudo-needs and signs of poor design.

**Looking specifically at the directories from above, I note that they are not hidden—on the contrary, they are extremely visible. Further, that they almost certainly are intended for one or both of two purposes: Firstly, to prescribe the user where he should put his this-and-that, something which is entirely unacceptable and amateurish—this is, remains, and must be the user’s own decision. Secondly, to ensure that various applications can rely on these directories being present, which is also entirely unacceptable and amateurish: Applications should not make such assumptions and should be capable of doing their job irrespective of the presence of such directories. At most, they can inquire whether a missing directory should be created—and if the offer is turned down, the applications still need to work. If they, unwisely, rely on the existence of, say, a picture directory at all, it should also be something configurable. They must not assume that it is called “Pictures”, because the user might prefer to use “Images” and already have other tools using that directory; similarly, they must not assume that the directory, irrespective of name, is in a given position in the file system, because the user might prefer e.g. “~/Media/Pictures” over “~/Pictures”; he might even have put all his pictures on a USB-drive or a server, potentially resulting in entirely different paths.
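For those bitten by the same mechanism, a minimal sketch of how it can likely be disabled, based on the documented configuration of xdg-user-dirs; verify the paths and names against your own system:

    # System-wide: turn the directory (re-)creation off entirely in
    # /etc/xdg/user-dirs.conf (or per user in ~/.config/user-dirs.conf):
    enabled=False

    # Alternatively, per user in ~/.config/user-dirs.dirs: pointing an
    # entry at $HOME is documented as disabling that particular directory.
    XDG_DESKTOP_DIR="$HOME"
    XDG_PICTURES_DIR="$HOME"
    XDG_MUSIC_DIR="$HOME"
    # ... and so on for the remaining XDG_*_DIR entries.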

Looking up XDG on the web gives a negative impression. For instance, the Wikipedia page has a list of hosted packages/projects, some of which—including D-Bus, PulseAudio, systemd, and Poppler*—are among my least favorite, because they are e.g. redundant, do more harm than good, or replace superior solutions. To boot, they often violate the design principles that used to make Unix and its derivatives great. Some others leave me ambivalent, notably Wayland: Even assuming that X needs replacement or improvement**, Wayland seems to be the wrong way to go, as it reduces both flexibility and separation of concerns.***

*At least with regard to how it has screwed up xpdf.

**It quite probably does, about three decades (!) after the last major specification release. However, in my own use, I cannot think of anything major that bothers me.

***With the reservation that I have not read anything on Wayland in years and am not aware of the latest state: Because I am not a Ubuntu user, I have not been forced to a practical exposure.

Looking at the “Stated aims” further down the page, most are good or neutral (with some reservations for interpretation); however, “Promote X desktops and X desktop standards to application authors, both commercial and volunteer;” is horribly wrong: Promoting such standards to desktop authors is OK, but not to application authors. Doing the latter leads to unnecessary dependencies, creates negative side-effects (like the unwanted directories above), and risks us landing in a situation where the system might need a desktop to function—not just a window manager or just a terminal.

For instance, I have now tried to uninstall everything with “xdg” in its name. This has left me with “xdg-utils” irremovable: If I do uninstall it, a number of other packages will go with it, including various TeX and LaTeX tools that have nothing to do with a desktop. In fact, there is a fair chance that they are all strictly command-line tools…

I have also searched for instances of “freedesktop” (a newer name, cf. Wikipedia). Trying to uninstall “hicolor-icon-theme” (admittedly not something likely to be a source of problems) leads to a request to also uninstall several dozen packages, many (all?) of which should reasonably still work without an external icon theme. By all means, if icons can be re-used between applications, try to do so; however, there must be sufficient basic “iconity” present for a good program to work anyway—or the programs must work sufficiently well without icons to begin with. Indeed, several of these are either command-line tools (that should not rely on icons in the first place) or make no or only minimal use of icons (e.g. pqiv and uzbl).

Worse, chances are that a considerable portion of these tools only have an indirect dependency: They do not necessarily need “hicolor-icon-theme” (or “xdg-utils”). Instead they rely on something else, e.g. a library that has this dependency. Here we can clearly see the danger of having too many dependencies (and writing too large libraries)—tools that do not need a certain functionality, library, whatnot, still need to have it installed in order to function. This leads to system bloat, greater security risks, and quite possibly diminished performance. Unfortunately, for every year that goes by, this problem appears to grow worse and worse.
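As an illustration, such dependency chains can be traced with standard Debian tools. A quick sketch (the package names are merely the examples from the discussion above):

    # Which installed packages pull in a given package?
    apt-cache rdepends --installed xdg-utils
    apt-cache rdepends --installed hicolor-icon-theme

    # What does a given package declare as its own dependencies?
    apt-cache depends xdg-utils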

Forcing XDG functionality and whatnot into applications that do not actually need them is a bad thing.

(Similarly, a great deal of my skepticism against D-Bus arises from the fact that it is “needed” by every second application, but is rarely actually used for something sensible. In the vast majority of cases, the application would have been just as good without D-Bus, but it still has a dependency and it still presumes to start an unwanted D-Bus at will—which it typically does not clean up afterwards…)

Written by michaeleriksson

May 8, 2018 at 1:31 pm

Follow-up: Linux vs. GNU/Linux


In light of a lengthy reply by a user codeinfig to an earlier post on the issue of “Linux” vs. “GNU/Linux”, I revisit this topic.

This post comes in two parts: an extension of the original discussion (partially driven by the reply, but mostly held abstract) and a more specific rebuttal of said reply (formulated in terms of a direct answer).

General discussion:

  1. At the time of my original post, I actually was not aware of the amount of controversy that surrounded this issue, mostly seeing Stallman’s position as an example of flawed thinking, and my intention was (like in much of my previous writings) to point to such flaws. (Possibly, because commercial Unixes dominated my experiences of the Unix-like OSes until the early new millennium.)

    With hindsight, it was highly naive of me not to expect the topic to be “hotter”: This is the Internet, and more or less any question that could cause controversy will be discussed/argued/flame-warred at length—even be it something so trivial-seeming as a name. To boot, this issue appears to be almost as old as Linux, giving it plenty of time to have been discussed.

  2. I stress again that I do not claim that “Linux” is an appropriate name when we do not speak of the kernel (cf. previous statements). However, “GNU/Linux” does not solve the involved problem. On the contrary, it compounds it further, because arguments against using “Linux” are the stronger against “GNU/Linux”. (However, this might very well have been different in the 1990s.)
  3. “GNU”, on its own, has at least three different meanings: The OS envisioned by Stallman, the GNU project, and the GNU set of tools/programs. Of these, I would consider the first the least relevant today, because this OS has simply not materialized in its full intended form, even after several decades, and I honestly cannot recall when I last heard that meaning used prior to this discussion. Even as early as 1994, when I started college and made my first contacts with Unix (specifically SunOS), the tools were arguably more important than the OS: The default setup after login consisted of one instance of Bash (running in an Xterm) and one instance of GNU Emacs; the main computer work for the first semester consisted of writing and executing LISP programs using Emacs—even under a commercial Unix version, with its own set of pre-existing editors, shells, whatnots, GNU tools had been preferred. An alternate editor and semi-imitation of Emacs, MG, was typically referred to as “Micro-GNU”*, showing the relative importance of Emacs within the GNU sphere at the time.

    *The actual name was “Micro-GNU-Emacs”, with the intended focus on “Emacs”, with “GNU” only serving to avoid confusion with other (less popular) variations of Emacs. (A distinction that hardly anyone bothers with today, “Emacs” being used quasi-synonymously with “GNU Emacs”, just like “Windows” usually contains an unspoken “Microsoft”.) However, so dominant was Emacs in the perception of GNU that most people shortened the wrong component of the name…

    But, by all means, let us go with the OS-meaning*: We say “GNU” and mean an OS. Even now, however, the use by codeinfig does not make sense. He appears to use “GNU” (resp. “gnu”) as an all-encompassing term for the OS in a very wide sense** or even the whole system, effectively saying that not only is e.g. a Debian system “GNU/Linux”—it is actually “GNU”… This goes towards the absurd, because even when we speak of “GNU” as an OS, the possible interpretations are 1) the whole original vision by Stallman, i.e. “GNU/HURD” and 2) just the “GNU” part of “GNU/HURD” (resp. “GNU/kernel-of-your-choice”). If we take the first, Debian is only even a GNU candidate when the HURD kernel is used (which will not be the case when we have a Linux-version of Debian) and speaking of just “GNU” in the manner of codeinfig is clearly wrong; even speaking of “GNU/Linux” would be clearly wrong a priori. If we take the second, “GNU/Linux” would still be conceivable (before looking at other aspects of the issue), but the equation GNU = GNU/Linux would be obviously incorrect.

    *For simplicity of discussion, I will try to stick to this meaning in the rest of the text, where the difference between the three matters and where the right choice is not clear from context. Note that earlier references do not necessarily do so.

    **An annoying problem in this situation is that it is very hard to define the border between OS and application in such a manner that everyone is happy and all circumstances are covered. A more fine-grained terminology would be beneficial—dividing software into just OS and applications is as simplistic as dividing the year into just winter and summer. (However, this is secondary to the current discussion.)

  4. If we do use the OS meaning, then, yes, I would consider GNU mostly irrelevant today. It is of historical importance and it might very well grow important again, but today it is dwarfed by Linux, various BSD variants (arguably including MacOS), and possibly even the likes of OpenSolaris and its derivatives. And, no, this OS is not what e.g. I have running right now.

    On the other hand, the GNU tools/programs and the GNU project are highly relevant and immensely valuable to the world of non-commercial/-proprietary computing.

  5. GNU/Linux systems are certainly conceivable: Take GNU (in the sense of an OS without the kernel) and add Linux as a kernel. Such systems might even be present today. A typical Linux-kerneled* distribution, however, is simply not an example of this.

    *See what I did there!

  6. Some seem to think that “because system A uses GNU components it is GNU” or “[…] it should use GNU in its name”. This line of reasoning does not hold up: It is simply not practical to mention every aspect of a system (be it in IT, Formula One racing, or house building), and GNU does not today play so large a part that it warrants special treatment over all other aspects, including e.g. the X server and associated tools or the desktop. Again, this might have been different in the 1990s, but today is today. Cf. my first post.

    Notably, any even semi-typical Linux-kerneled system of today runs a great variety of software from a number of sources, and limitations in naming like “Linux”, “GNU/Linux”, whatnot, simply make little sense. Let a user name his five or ten most used “regular” applications and his desktop or window manager (depending on what is central to him), and we know something about his system, his needs, and his user experience. For most users, the rest is just an invisible implementation detail. Hell, many use even the command line only as a last resort… (To their own major loss.)

    That GNU possibly was the first major attempt at a free or open-source OS is not relevant either. Consider by analogy Project Gutenberg: Its founder claims* to have been the first to think of the concept of eBooks. Should any party dealing with eBooks be forced to include “Gutenberg” in its name, resulting e.g. in the “Gutenberg/Kindle” reader? Or should ordinary book publishers be forced to refer to the original Gutenberg, for using printing presses? No—both notions are absurd. They might deserve to be honored for early accomplishments and, certainly, someone might choose to voluntarily name something in honor (as Project Gutenberg did with the original Gutenberg)—but no obligation can conceivably be present.

    *I very much doubt that this is true. Yes, his idea goes back to, IIRC, the early 1970s or late 1960s, but even back then it cannot have been something entirely unthought of, be it as a vision for the (real) future or as sci-fi. Vannevar Bush published ideas several decades earlier that at least go somewhat in the same direction.

  7. Some arguments appear to go back to a variation of moral superiority, as with Stallman’s arguments (also linked to in my original post) or codeinfig’s below. Notably: Linux is not free (in the sense of free software etc.) enough/does not prioritize freeness enough; ergo, GNU is morally superior and should be given precedence. This too is a complete non sequitur that would lead to absurd consequences, especially because the different parties have different priorities for deliberate reasons.

    Someone who does share GNU’s priorities might, for all I care, choose to voluntarily include GNU as a part of the name of this-or-that. However, no obligations can exist, and those who do not share said priorities have absolutely no reason to follow suit. More: It would be hare-brained if they did…

As for the more specific reply*, I start by noting that there are clear signs that you** have not understood (or misrepresent) what I am actually saying, and that it is hard to find a consistent line of reasoning in your text (your language does not help either). If you want another iteration of the discussion, I ask you to pay more attention to these areas.

*I have left some minor parts of the original reply out. There can be some changes in typography and formatting, for technical reasons. I have tried to keep the text otherwise identical, but it is conceivable that I have slipped somewhere during editing or spell-checking—always a danger when quoting extensively. The full original text is present under the link above, barring any later changes by codeinfig.

**I address codeinfig directly from here on.

“whether the emphasis is on GNU alone or GNU and HURD in combination matters little for the overall argument.”

it covers more of the argument than you realise, and it is the flaw in your argument.

you are making gnu out to be a tiny subset of what it is, and making it less significant (details aside, you are greatly diminishing what it is) so that it pales next to “linux.”

this is unfair for several reasons— first, you do not understand what gnu is. you think gnu is just some software that isnt useful to the (everyday) user. its a misrepresentation that would lend at least some weight to your argument, if it werent a misrepresentation.

The “GNU” vs “GNU/HURD” distinction only makes sense if we abuse “GNU” in the manner I have dealt with above. The rest is largely a distortion of what I say. In particular, I have never claimed that GNU would pale next to Linux (in the kernel sense)—I claim that it pales in comparison to the overall systems. (Which really should be indisputable.) If you re-read my original post, you will find that I clearly point to uses of GNU tools that are not obvious to the end user; however, the simple truth remains that for someone who does not live on the command line or in Emacs, the overall importance of GNU is not so large that it deserves special treatment over some other parties.

You do not seem to understand how many different components of various types and from different sources go into building e.g. a Debian system (be it as a whole or as the OS), with many of them present or not present depending on the exact setup. We simply do not have anything even remotely close to an almost-just-GNU system with Linux dropped in in lieu of HURD, which seems to be your premise.

gnu was “the whole thing” before linux was a kernel. the web browser is not “linux” either, it is a browser. but we call it “linux.” xwindows predates “linux” by nearly a decade, but we call it “linux.”

when we call these things “gnu” you fail to understand that *that is what it was called already* and “linux” is no more a web browser than gnu was at the time, but somehow its ok for linux to presume itself to be all those things, but its “riding coattails” if gnu helps itself by being included.

Here we have several misrepresentations, e.g. the claim that the browser would be called “Linux”—this simply is not the case. Neither were those things already called “gnu” in the past. Notably, in the time before Linux-kerneled systems broke through, the clear majority of users were running commercial Unixes, e.g. SunOS, with GNU either absent or represented through a few highly specific tools, e.g. Emacs. While it is true that GNU was conceived as “the whole thing” (by the standards at its conception), this does not imply that it actually is “the whole thing” when it is included in a greater context and at a much later date. By analogy, if someone launches his own car company A, and another car company B, thirty years later, uses parts delivered from A, other parts from other companies, and parts that it has produced on its own, should B’s products then be referred to as “A” or as “B”? Obviously: “B”. In addition, due to the absence of HURD there is no point of time prior to Linux where GNU actually, even temporarily, was “the whole thing”, making the claims of precedence the weaker.

Note the item on historical influence above.

Note that my original formulation concerning coat-tails, a) referred explicitly to “better-known-among-the-broad-masses”, which is indisputably true and makes no implication concerning e.g. practical importance, b) was used to demarcate the outer end of the spectrum of interpretations of the situation—I never say that Stallman’s intent is to ride the coat-tails of Linux, only that this is the worst case interpretation.

the whole idea that linux is entitled to do this but gnu is not is special pleading *all over the place.*

its special pleading that strawmans the heck out of what gnu is in the first place— with a generous “side” of ad hom for why stallman thinks we should call it that. oh, its his quasireligious views…

No such special pleading takes place. I clearly say that “Linux” is a misnomer—but that “GNU/Linux” is a worse misnomer through compounding the error.

Watch your own strawmanning!

no, his arguments are not quasireligious. they are philosophical and even practical. thats an ad hom attack, and the only thing religion has to do with it is in parody (and other related ad hom from critics.)

If you actually read what he says, you will find that he is quite often religious/ideological and lacking in pragmatism: He has an idea, this idea is the divine truth, and thou shalt have no other truth. Watch his writing on free this-and-that in particular.

i suspect that at some point (to be fair, you havent yet) you will accuse me of being some kind of stallman devotee. i was an open source advocate first, but i switched to free software after years of comparing the arguments between them. open source is a corporate cult, partly denounced by one of its own founders.

i switched to free software because it lacks the same penchant for rewriting history, for splitting off and then accusing those who didnt follow of “not being a team player,” and basically is more intellectually honest than open source and “linux.”

but its like an open source rite of passage to nitpick about “gnu/linux,” and it tends to follow a formula. you left out the part about how “free” is a confusing word with multiple meanings— sort of like “apple computer.”

I have not yet, and I will not here either—and it would not matter if I did: Your arguments remain the same irrespective of whether you are a devotee or not.

As for your motivations to prefer free over open: They have no relevance to the naming issue.

“To the best of my knowledge, no-one, Stallman included, has suggested that we refer to GNU (!) as GNU/Linux.”

in most instances he does. your definition of gnu is in fact, partial and subset, so he has never suggested we refer to that subset by anything. i dont believe he has ever referenced the subset you call “gnu” at all.

See the general discussion above and why this does not make sense. (But I do not rule out that Stallman too can have said something that does not make sense.)

” `The question is rather whether a Linux (sensu lato) system should be referred to as GNU/Linux.’ “

no, thats a loaded question, and a fine ingredient for a circular argument. “its already called linux.” well, it was already called gnu. but again, *somehow* linux is entitled to do that and gnu isnt, even though gnu was already calling it that.

for stallman and many others who have not been swayed by over a decade of these “dont bother calling it gnu” articles, the question is whether gnu should be referred to as only “linux.”

It was never called GNU, and even if it were, you cannot demand that others, building new products of which GNU is a subset, propagate the name in perpetuity. Cf. above.

the answer to that question is cultural, and already explained— *if you care about software freedom* then gnu is a signifier. to a programmer this makes sense— its self-documenting.

from a marketing perspective, this is ridiculous. to a “linux” fan (to torvalds himself) this is riduculous. to me, its a *lot* more honest. however, what stallman has done is establish a brand that shows something living up to a promise.

“gnu” is quality control (a brand) for user rights. and linux really isnt. it really really isnt, but why it isnt is a separate debate. im not trying to write you an oreilly book here.

so again— if you want to signify user freedom, call it gnu/linux. (if it were up to me, the /linux would be dropped, stallman was trying to be fair.) if you want to signify whatever the heck “linux” stands for, call it whatever you want. i call fig os “fig os,” but fig os is a gnu/linux distro.

This is the flawed moral argument discussed above. In addition, why should e.g. Torvalds include “GNU” to stress free software when free software is not his priority? If he does not include it, how can I (you, Stallman, …) presume to alter the name based on having another priority?

As an aside, if the reasoning went in the other direction, i.e. “You are not free enough to use our name, so stop using it”, this would make a lot more sense. (Assuming that someone sufficiently non-free did use “GNU”.)

“Not at all: A world-view in which GNU is so important that it would warrant top-billing in the context of a Linux system is outdated—not GNU it self.”

thank you for clarifying, but you have not explained why gnu is not important enough to warrant top-billing, except to say that applications are more important to users who dont know why gnu is important.

Again: There are many components from different sources that make up a system. GNU is just one of them. (If you feel that GNU truly outweighs all others to such a degree that it warrants top-billing, it is up to you to prove this.) Further: What is important to the user is what matters in the end. If, by analogy, Bash or Emacs behaves the same, but is now implemented in Java or C#, it remains Bash resp. Emacs. The developers might be in understandable tears, but the world goes on. Implementations are fungible to the users—the result of the implementation is not. Hell, when I use Vim, Bash, and Tmux* under Cygwin, I have almost the same experience as when working under Debian, even when actually on a Windows machine… Even speaking of “Windows user” and “Linux [or whatnot] user” makes a lot less sense today than it did in the past, and it is often more sensible to speak of e.g. “Bash user” and “Vim user”.

*Note that of the three, only one is a GNU program.

your article is mostly assertions, and you do start to explain some of them though i still think it rests mostly on ad hom and assertion. its a very common set of assertions too— made year after year after year, i even made them myself once long ago.

Ad hominem and assertions pretty much match what I see in your writing…

“I am saying that building e.g. a Debian system without GNU is conceivable.”

thats hardly fair. gnu has been vital to all this for a quarter of a century (bsd can make a similar argument, considering that the only reason gnu was necessary was they were tied up in gaining the rights to their own work.)

I doubt that GNU has been that important for a whole quarter of a century, but even if it has been, that is irrelevant: It is not that important now. This is not a matter of fairness. Light-bulbs were great; today, they have been replaced by LEDs and other newer technologies. (Be it for technical reasons or through legislation.) I do not look up at my ceiling lamp and say a quick prayer to Edison (or one of the other inventors involved in the development of light-bulbs).

you make gnu less essential by creating a strawman version of what it is, so that you can say “but this isnt a good enough reason to warrant top-billing.”

we cant agree on the validity of your argument if you insist on misrepresenting what you weigh the importance of.

in fact, your argument suggests to me that the name is more important than the thing itself. i mean— gnu wouldnt be essential if this mostly-hypothetical thing like gnu were created instead! but thats no reason to call my entire operating system:

gnu! linux!

stallman has said lots of times that the name really isnt important. seems to really go against this whole thing, eh? people miss his asterisk where he says its important that everything gnu exists to accomplish is not forgotten for a side movement that reframes years of work to deliver freedom to the user as “just a practical way to develop software.”

Most of the above is more of you misrepresenting (or misunderstanding) what I am actually saying. As for the name vs. the thing: The topic of my post is the name and flawed reasoning around the name; ergo, I deal with the name and the flawed reasoning.

open source creates the need for this, stallman says “well if youre going to misrepresent everything we do, at least give us three letters of credit for this enormous amount of software youre relying on.” and people say more or less: “wow, the nerve of THIS GUY!”

its so funny because all he wants is for people to not forget that the entire point of all this was free software.

His entire point was free software. Torvalds’ (and many others’) is not. And again: If GNU were given credit in the name, then there would be other parties with a similar right.

your argument is whether it should be called gnu/linux or not. but it never addresses the years-old argument of why it should be called gnu— it makes up its own reason, and then steps on it.

it really is a giant strawman. and i appreciate that you are almost certainly sincere and wouldnt create a strawman just to be a jerk. but its still a strawman.

Your claims make no sense, unless you truly are under the misapprehension that a typical system consists of the kernel, various GNU components, and a few trivial other bits. This is very, very far from the truth.

“GNU GPL, however, is irrelevant for the functioning, it would be easy to replicate something similar,”

haha— it isnt irrelevant at all, ask torvalds if it is. it isnt easy to replicate something similar either, and its less easy to get people to use such a thing. pulling off copyleft (when a billion dollar corporation was heavily dedicated to defeating it) was a serious coup. youre making it out to be a bunch of words in a file.

go make a gnu gpl— go ahead. show me. have anybody you can find to help in on it, too. have someone show me how easy it is.

“there are other available licenses. Notably, such other licenses, e.g. something in the Apache or BSD families, are often preferred by people outside GNU, because these parties have other priorities than free software.”

and thats the thing. they dont do what the gpl does, they dont achieve what the gpl does, but you consider them replacements. its apples and oranges.

Firstly, you assume (again) that copy-left, free this-and-that, whatnot is the priority of everyone. It might be your and Stallman’s priority, but it simply is not a global priority—and it is not needed to build e.g. an open-source computer system. Linux, Debian, … do not need the GPL to exist.

Now, what if someone does want a copy-left license? Firstly, there are other copy-left licenses around, if possibly with a somewhat different coverage. Secondly, combining another existing license with aspects of the GPL and/or with a few days of research by a lawyer should yield something quite passable. (True, there might be a few issues to sort out over time, e.g. due to ambiguities or complications with different jurisdictions, but nothing that would require several decades to build.)

“GCC is mostly important for building the system, not for using it.”

special pleading, all over the place.

Not in the least: How would you justify including the compiler used to build the system as a component of the system? (Except for those proportionally rare cases when it is actually used to compile other programs when later using the system.) See also below.

“In fact, even now, many build setups contain explicit checks for the presence of GCC and automatically fallback to CC, should GCC be absent.”

and this is a bit of trivia, because all this stuff we have now would not exist (and would not be maintained) if everyone had to use cc. its like you know the significance of gcc but choose to ignore it when its convenient to your argument.

open source would not exist without gcc. linux might, but not the linux we have today. some little usenet gem that wasnt developed by half as many people— because they needed gcc to do it.

That is a very far-going claim. Can you back it up? I doubt it. In the case of open source in general, it is definitely incorrect, as can be seen by the many projects that use e.g. Java instead of C… Also keep in mind that, in the absence of GCC, someone would likely have started to improve CC or to build a more suitable compiler as the need arose. Consider e.g. how Git came into being; note that GCC as it is today is a very different beast from what it was when Torvalds started his work; note that GCC contains much that is not needed for Linux in the first place (e.g. unused languages) or is not essential for the existence of Linux (e.g. compilation for architectures outside of the main-stream PC processors).

why even mention cc when you could have talked about clang instead? because i already addressed that when i talked about bsd, and because the part about cc is hypothetical (and at best, unlikely.)

One of several claims that make poor sense even on the sentence level. Besides: When did you address CC? Why would what I say about CC be unlikely?

As for Clang, I was not aware of it until now (but did consider mentioning LLVM in addition to CC, or the possibility of having started the original development with even a non-Unix compiler). However, its existence proves my point: Even if CC would not be a realistic replacement for GCC today, another tool definitely is. And: Other tools capable of filling the role of GCC have been possible at any point.

“glibc is possibly the most deeply ingrained dependency (and a better example than my original GRUB); however, this is still just one library.”

and linux is just one kernel. so what? its a monolithic kernel, and glibc is a monolithic library. theyre both enormous. you cant make them smaller by counting units, thats absurd.

glibc is far smaller than the kernel, and the kernel, by its very character, is the core of an OS. (glibc is not even the largest individual library—quite far from it, actually.) What you mean by “counting units” is not clear to me. The relevance of whether they are monolithic or not is lost on me.

“Here too we have the situation that glibc is not used because it is the only alternative, just the best.”

so once again, we shouldnt call it “gnu/linux” because gnu is just a bunch of vital components that arent vital because you could easily replace them with a bunch of drastically inferior alternatives that no one actually wants.

hmpf. yes, im taking some liberties with my version of your argument, but only to try to get its author to appreciate how much of a stretch it is.

“As with GCC, its absense would simply have led to something else being used.”

so *hypothetically*, gnu doesnt deserve top billing. because it could be less important than it is, if it werent.

You miss my point: If we look at the situation as it is and say “part X is important today; ergo, if part X had never existed, the whole would not exist”, we ignore both the possibility of a replacement that would still have made the whole viable and the considerable likelihood that something else would have evolved over time to fill the same role or that the role would have been covered in a different manner. If we look at the situation today and see that just removing e.g. glibc would cause a given system to fail catastrophically, we cannot conclude that the system would not have existed had glibc not been present in (hypothetically) 1990—and therefore we cannot conclude that the existence of the system is contingent on glibc and, by implication, GNU. As a consequence, when you say “i said without gnu. no gnu gpl, no gcc, no glibc. you go right ahead, since gnu is irrelevant now. remove it, and find out what you get”, the answer is “without GPL, GCC, and glibc, we would see something that is recognizably approximately what we have today”. We might have ended up with a king penguin instead of an emperor penguin, but we are still talking penguins. Now, a scenario that removes GNU entirely from the early Linux development could have been a very major problem—but that does not imply that Linux and/or Linux-kerneled systems cannot exist without GNU today, or that e.g. glibc is so central that they would never have come into existence without it.

“Even now, keeping the interface intact and replacing the implementation with a non-GNU variation would be technically feasible.”

but then, why should we rewrite glibc just to deny gnu the billing it allegedly doesnt deserve now?

That is not what I suggest: The point is that if glibc was no longer an option, hypothetically because a GPL violation necessitates its removal, a work-around is available. Yes, this might be tantamount to a team of surgeons operating around the clock to put in an artificial heart that buys the patient time until a real heart transplant is possible; no, it does not equal a dead patient.

“arguments that speak against referring to a system by the name of its kernel also speak against using the names of individual libraries, build-tools, and whatnots.”

except that you are oversimplifying the “linux is just a kernel” argument, failing to understand what people actually want with the name “gnu/linux,” not aware of why they want it, and making the name out to be more important than what the name refers to.

Not at all: The only way I can see to make your statement make sense is to posit that “GNU/Linux” would actually be enough to cover the entire OS (at a minimum) or the OS plus a considerable portion of the rest of the system. This, however, is not even close to being the case. It is conceivable that a working “GNU/Linux” (only) system is buildable today, but it would not be the equivalent of e.g. Debian, Fedora, Suse, …

As to name vs. thing, cf. above.

“I grant that Linux would conceivably not exist today without the presence of GNU in the past”

nor the present.

Prove your assertion. Remove GNU today, where do you see the insurmountable obstacle? (As opposed to the far more likely transitional period of blood, sweat, and tears.)

“if we speak of the system as a whole (the sensu lato), I refer to my post for a discussion why GNU is no longer important enough to define the system.”

now it is a discussion. it was an assertion, which leaned a bit on misunderstanding and special pleading and ad hom.

I strongly disagree.

“I do. Cf. above and your apparent confusion of what GNU is.”

i am not “confused” about what gnu is. gnu was from the beginning, a fancy-pants latin phrase (“the whole thing.”)

since the 1990s, a bunch of people have suggested that it is just a bunch of applications that everyday people dont really use.

your argument is built around the suggestion being a fact.

See the general discussion for various meanings and your incorrect interpretation.

given that the conclusion of your argument is that we should agree with them, i would call your entire argument circular.

Your claim makes no sense, shows that you have not understood me, and raises doubts as to whether you understand what a circular argument is.

we dont have to rewrite history. however, i would say you argue (quite unintentionally, beacuse i think you really do misunderstand the nature and premise of your own argument) that history doesnt deserve to not be rewritten.

it is not necessary to rewrite history to refer to gnu and “linux” instead of gnu/linux.

Again, you make no sense.

it is necessary to rewrite glibc, the gpl, and reestablish so much of gnu that you generously refer to as “linux,” in order to make most of the PREMISES of your argument into facts.

If you believe that, you definitely have not understood what I am actually saying.

if the premises are false, the argument isnt sound. in your reply, you spend a lot of time defending the logic of your argument based on a more hypothetical premise.

the premise of your argument was just false. the logic is heavily just assertion.

You have not shown that my premise is false; yet you seem to rely on faulty premises or faulty understandings yourself.

there isnt any need for ad hom, its simply wrong. but thats not important.

Where have I used ad hominem? Do you understand what this actually implies?

what matters is that in ten years, people will still be trying to get “gnu” removed from “gnu/linux.” and we can have this debate all the way there. i do hope we get breaks for the restroom though.

I do not see that as something that really matters, and the opinion that “GNU” does not belong in the name is likely to grow stronger for reasons that include a further lessening of GNU’s practical relevance, a smaller proportion of people who know of GNU at all, and a growing importance of both the distribution aspect and the desktop aspect.

“I do consider free software highly beneficial, but free software is not a core priority of Linux”

and that is exactly why stallman says it shouldnt be called just linux. because free software is not a core priority of it.

thats his entire argument. if you care about free software, call it gnu.

it has nothing to do with percentages of code, it has nothing to do with riding coattails.

it has everything to do with why gnu was created in the first place. not to write glibc, not to give you a web browser.

gnu was created to give the user freedom. and if you care about that, calling it “linux” ignores the original purpose, paints something relevant in modern times into enough obscurity that people think its just about user applications— and lets “linux” come along and assert boldly that freedom doesnt matter.

its not about ego, religion, or percentage of code. its about whether you care about freedom or not.

See the general discussion for why this is a faulty argument when it comes to the name.

funny thing, its always implied that stallman is just nitpicking, but year after year (after year after year) open source nitpicks that “gnu” isnt important enough to be in the name.

I, personally, have implied no such thing. That “GNU” does not belong is not nit-picking.

free software never convinces everyone to add “gnu” and open source never convinces everyone to drop it, but both sides continue to nitpick this for decades.

The division into free and open software should not play a role when discussing the name issue. If it does, something is fundamentally wrong with the approach.

your argument is most likely honest, if lacking context and history. the argument itself has its own history, though open source doesnt learn from the failure of the argument youre making, it just keeps reasserting it.

the history of your argument is that it is constantly made— 20+ years running now.

i made it myself, over a decade ago— i abandoned it because it was silly.

From what I have seen so far, the lack of historical and contextual understanding seems to be more on your side, with the one reservation that I was actually not aware of the extensive history of the argument. The reasoning you apply today, e.g. that what I refer to as moral superiority above should affect the name, is the silly part.

Written by michaeleriksson

April 25, 2018 at 6:25 am

“Linux” vs “GNU/Linux”


A sometime claim, made especially by Richard Stallman, the founder and main force behind GNU, is that “Linux” is an inappropriate term (when not referring specifically to the kernel) and that “GNU/Linux” would be better…

However, this view is at best outdated*—at worst, it is an attempt to ride on the coat-tails of a better-known-among-the-broad-masses project. Most likely, however, it is a sign that Stallman is too fixated on his own vision of “GNU/HURD”**, and is unable to see that there are other perspectives on the world: Since his focus is on GNU, those who use Linux instead of HURD obviously appear to use “GNU/Linux” instead of “GNU/HURD”. This, however, has very little relevance for the typical Linux user:

*GNU used to be a much bigger deal than it is today, for reasons both of changing user demographics/behaviors/wants and of an increased set of alternative implementations and tools. Certainly, Linux (in any sense) would have had a much tougher time getting off the ground without GNU.

**HURD was conceived as the kernel-complement to GNU roughly three decades ago—and has yet to become a serious alternative to e.g. Linux.

The general criticism that Linux is just the kernel and that the user experience is dominated by user programs (and other non-kernel software, e.g. a desktop) is quite correct. (This can be seen wonderfully by comparing an ordinary Linux computer and an Android smart-phone: They have very little in common in terms of user experience, but both use a Linux kernel. Conversely, Debian has made releases that use a non-Linux kernel.) However, in today’s world, most Linux users simply do not use many GNU programs, they have correspondingly little effect on the user experience, and a functioning Linux system entirely without them* is conceivable.

*The main problem being “hidden” dependencies. For instance, most Linux computers use GRUB for booting and GRUB is a GNU tool. However, none of these hidden dependencies are beyond replacement.

For instance, a typical Linux user might use Firefox or Chrome (both non-GNU), LibreOffice (non-GNU), a few media applications (typically non-GNU), … Even most parts of the OS in an extended sense will typically not be GNU programs, e.g. the X server, the window manager, the log-in manager, the network manager, a desktop environment, … The best way to approximate the user experience would likely be to speak of e.g. “distribution/desktop”, e.g. “Debian/KDE”*, especially seeing that most desktop environments insist on providing their own, entirely redundant tools for tasks that more generic tools already do a lot better, including text editors, music players, image viewers, …

*KDE is a user-hostile disaster that I strongly recommend against, but it is likely still the most well-known desktop environment. Not everyone uses a desktop environment at all, but most do.

Even those, like yours truly, who actually do use a lot of GNU programs are not necessarily bound to GNU: Most important GNU tools are re-implementations of older tools, and there are alternate implementations available even in the open- and free-source worlds. Are the GNU variations of e.g. “ls”, “mv”, “awk” better than the others? Possibly. Would it kill someone to switch? No. Even a switch from Bash to Ksh or Zsh would not even be close to the end of the world. Admittedly, there might be some tools that are so significantly better in the GNU version that users would be very troubled to switch (gcc?) or that are not drop-in replacements (e.g. gnumeric). These, however, typically are either developer tools or have a small user base for other reasons. Most modern users will not actively use a compiler—or will not need the extras of gcc for their trivial experiments. Most users will opt for a component of an office suite (e.g. LibreOffice) over gnumeric. Etc.

For that matter, even on the command line, my two most extensively used programs (vim, mplayer) are not from GNU either…
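Those curious can easily check where their own tools come from, as most GNU programs identify themselves in their version banners. A minimal sketch (Python), assuming the probed tools are on the PATH and support “--version”; tools that do not will be misreported, making this a heuristic only:

    import subprocess

    # Tools to probe; vim and mplayer are the two mentioned above.
    TOOLS = ["ls", "mv", "awk", "grep", "bash", "vim", "mplayer"]

    for tool in TOOLS:
        try:
            result = subprocess.run([tool, "--version"], capture_output=True,
                                    text=True, timeout=5)
        except FileNotFoundError:
            print(f"{tool:8} not installed")
            continue
        except subprocess.TimeoutExpired:
            print(f"{tool:8} gave no timely --version output")
            continue
        # Most GNU tools put "GNU" somewhere in the first line of the banner.
        lines = (result.stdout or result.stderr).splitlines()
        banner = lines[0] if lines else ""
        tag = "GNU" if "GNU" in banner else "non-GNU?"
        print(f"{tool:8} {tag:8} {banner}")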

Yes, using “Linux” is misleading (but generally understood correctly from context); no, using “GNU/Linux” is not an improvement. On the contrary, “GNU/Linux” is more misleading, shows a great deal of ignorance, and should be avoided in almost all cases*.

*An obvious exception would be a situation where GNU is the core topic and a contrast between GNU-with-the-one-kernel and GNU-with-the-other-kernel is needed.

GNU still plays a very valuable role through providing free-software alternatives for many purposes. This role, however, is not of a type that justifies “GNU/Linux”.

As an aside, Stallman’s own arguments focus unduly on the free-software aspect: Most of his text seems to argue that GNU is valuable through being keener on free software than Linux—something entirely irrelevant to the question of naming. (In general, Stallman appears to see free software as a quasi-religious concern, trumping everything else in any context.)

Written by michaeleriksson

April 14, 2018 at 4:33 am

Meltdown and Spectre are not the problem

with one comment

Currently, the news reporting in the IT area is dominated by Meltdown and Spectre—two security vulnerabilities that afflict many modern CPUs and pose a very severe threat to at least* data secrecy. The size of the potential impact is demonstrated by the fact that even regular news services are paying close attention.

*From what I have read so far, the direct danger in other regards seems to be small; however, there are indirect dangers, e.g. that the data read includes a clear-text password, which in turn could allow full access to some account or service. Further, my readings on the technical details have not been in-depth, and there could be direct dangers that I am still unaware of.

However, they are not themselves the largest problem, being symptoms of the disease(s) rather than the disease itself. That something like this eventually happened with our CPUs is actually not very surprising (although I would have suspected Intel’s “management engine”, or a similar technology, to be the culprit).

The real problems are the following:

  1. The ever growing complexity of both software and hardware systems: The more complex a system, the harder it is to understand, the more likely to contain errors (including security vulnerabilities), the more likely to display unexpected behaviors, … In addition, fixing problems, once found, becomes harder, more time consuming, and likelier to introduce new errors. (As well as a number of problems not necessarily related to computer security, notably the greater effort needed to add new features and make general improvements.)

    In many ways, complexity is the bane of software development (my own field), and when it comes to complicated hardware products, notably CPUs, the situation might actually be worse.

    An old adage in software development is that “any non-trivial program contains at least one bug”. In the modern world, we have to add “any even semi-complex program contains at least one security vulnerability”—and modern programs (and pieces of hardware) are more likely to be hyper-complex than semi-complex…

  2. Security is rarely prioritized to the degree that it should be, and is often not even understood. When in doubt, “Our program is more secure!” is (still) a weaker sales argument than “Look how many features we have!”, giving software manufacturers strong incentives to throw on more features (and introduce new vulnerabilities) rather than to fix old vulnerabilities or to ensure that old bugs are removed.

    Of course, more features usually also lead to greater complexity…

  3. Generally, although not necessarily in this specific case: A virtual obsession with having everything interface with everything else, especially over the Internet (but also e.g. over mechanisms like the Linux D-Bus). Such generic and wide-spread interfacing brings more security problems than benefits, for reasons that include a larger interface (implying more possible points of vulnerability), a greater risk of accidentally sharing private information*, and the opening of doors for external enemies to interact with the software and to deliberately** send data after a successful attack.

    *Be it through technical errors or through the users and software makers having different preferences. For an example of the latter, consider someone trying to document human-rights violations by a dictatorship, who goes to great lengths to keep the existence of a particular file secret, including keeping the file on an encrypted USB drive and cleaning up any additional files (e.g. an automatic backup) created during editing. Now say that he opens the file on his computer—and that the corresponding program immediately adds the name and path of the document to an account-wide list of “recently used documents”… (Linux users, even those not using an idiocy like Gnome or KDE, might want to check the file ~/.local/share/recently-used.xbel, should they think that they are immune—see the sketch below; other files of a similar nature are likely present for more polluted systems.)

    **With the particularly perfidious variation of a hostile maker of the original software, who abuses an Internet connection to “phone home” with the user’s private information (cf. Windows 10), or a smart-phone interface to send spam messages to all addresses in the user’s address book, or similar.

To this, government intervention, restrictions, espionage, whatnot, might be added, whether already or in the future.
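To illustrate the check suggested in the footnote above: the “recently used” list is plain XML (XBEL), and listing its entries takes only a few lines. A minimal sketch (Python), assuming the common default location of the file:

    import xml.etree.ElementTree as ET
    from pathlib import Path

    # Default location used by GTK/GIO-based programs; may differ on some systems.
    path = Path.home() / ".local" / "share" / "recently-used.xbel"

    if path.exists():
        # Each entry is a <bookmark href="file://..."> element.
        for bookmark in ET.parse(path).getroot().iter("bookmark"):
            print(bookmark.get("href"))
    else:
        print("No recently-used.xbel found.")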

The implications are chilling. Consider e.g. the “Internet of things”, “smart homes”, and similar low-benefit* and high-risk ideas: Make your light-bulbs, refrigerators, toasters, whatnot, into AIs, connect them to the Internet, and what will happen? Well, sooner or later one or more of them will be taken over by a hostile entity, be it a hacker or the government, and good-bye privacy (and possibly e.g. money). Or consider trusting a business with a great reputation with your personal data, under the solemn promise that they will never be abused: Well, the business might be truthful, but will it be sufficiently secure for sufficiently long? Will third-parties that legitimately** share the data also be sufficiently secure? Do not bet your life on it—and if you “trust” a dozen different businesses, it is just a matter of time before at least some of the data is leaked. Those of you who follow security-related news will have noted a number of major revelations of stolen data being made public on the Internet during the last few years, including several incidents involving Yahoo and millions of Yahoo users.

*While there are specific cases where non-trivial benefits are available, they are in the minority—and even they often come with a disproportionate threat to security or privacy. For instance, to look at two commonly cited benefits from this area: Being able to turn the heating in one’s apartment up from the office shortly before leaving work, or down from a vacation resort, is a nice feature. Is it more than a nice-to-have, however? For most people, the answer is “no”. Do I actually want my refrigerator to place an order with the store for more milk when it believes that I am running out? Hell no! For one thing, I might not want more milk, e.g. being about to leave for a vacation; for another, I would like to control the circumstances sufficiently well myself, e.g. to avoid receiving one delivery of (just) milk today, another of (just) bread tomorrow, etc. For that matter, I am far from certain that I would like food deliveries to be a common occurrence in the first place (for reasons like avoiding profile building and potential additional costs).

**From an ethical point of view, it can be disputed whether this is ever the case; however, it will almost certainly happen anyway, in a manner that the business considers legitimate, the simple truth being that it is very common for large parts of operations to be handled by third-parties. For example, at least in Germany, a private-practice physician will almost certainly have lab work done by an external contractor (who will end up with name, address, and lab results of the patient) and have bills handled by a factoring company (who will end up with name, address, and a fair bit of detail about what took place between patient and physician)—this despite such data being highly confidential. Yes, the patient can refuse the sharing of his data—but then the physician will just refuse to take him on as a patient… To boot, similar information will typically end up with the patient’s insurance company too—or it will refuse to reimburse his costs…

On paper, I might look like a hardware maker’s dream customer: In the IT business, a major nerd, living behind the keyboard, and earning well. In reality, I am more likely to be a non-customer, in large part* due to my awareness of the many security issues. For instance, my main use of my smart-phone is as an alarm clock—and I would not dream of installing the one-thousand-and-one apps that various businesses, including banks and public-transport companies, try to shove down the throats of their customers in lieu of a good web-site or reasonable customer support. Indeed, when we compare what can be done with a web-site and with a smart-phone app (in the area of customer service), the app brings precious little benefit, often even a net detriment, for the customer. The business of which he is a customer, on the other hand, has quite a lot to gain, including better possibilities to control the “user experience”, to track the user, to spy on other data present on the device, … (All to the disadvantage of the user.)

*Other parts include that much of the “innovation” put on the market is more-or-less pointless, and that what does bring value will be selling for a fraction of the current price to those with the patience to wait a few years.

Sadly, even with wake-up calls like Meltdown and Spectre, things are likely to grow worse and our opportunity to duck security risks to grow smaller. Twenty years from now, it might not even be possible to buy a refrigerator without an Internet connection…

In the meantime, however, I advise:

  1. My fellow consumers to beware of the dangers and to prefer more low-tech solutions and less data sharing whenever reasonably possible.
  2. My fellow developers to understand the dangers of complexity and to try to avoid it and/or reduce its damaging effects, e.g. through preferring smaller pieces of software/interfaces/whatnot, using a higher degree of modularization, sharing less data between components, …
  3. Businesses to take security and privacy seriously and not to unnecessarily endanger the data or the systems of their customers.
  4. The governments around the world to consider regulations* and penalties to counter the current negative trends and to ensure that security breaches hurt the people who created the vulnerabilities as hard as they hurt the customers—and, above all, to lay off idiocies like the Bundestrojaner!

    *I am not a friend of regulation, seeing that it usually does more harm than good. When the stakes are this high, and the ability or willingness to produce secure products so low, then regulation is the smaller of the two evils. (With some reservations for how well or poorly thought-through the regulations are.)

Written by michaeleriksson

January 7, 2018 at 1:08 am

Follow-up: On Firefox and its decline

leave a comment »

Since my post on the decline of Firefox, the developers have released another “great” feature, supposed to solve the speed problem compared to Chrome and other competitors: Electrolysis* (aka. e10s).

*I have no idea how they came up with this misleading name. Possibly, they picked a word at random in a dictionary?

This feature splits the browser into multiple processes and detaches the GUI from the back-end of the browser, thereby on paper making the browser faster and/or hiding the lags that do occur from the user.

In reality? In one browser installation* (shortly after the feature was activated), I had to disable this feature, because it caused random and unpredictable tab failures several times a day, forcing me to “restart” (I believe that was the chosen word) the tab in order to view it again. Even the tabs that did not need to be restarted only displayed again with a lag every time another tab had failed. The net effect was not only to make the browser more error prone, but also to make it slower (on average).

*I have several Firefox (more specifically Tor Browser) installations for different user accounts and with different user settings, including e.g. separate installations for business purposes, private surfing, and my WordPress account. This to reduce both the risk of a security breach and the effects of a breach, should one still occur. As for why the other installations were not affected, this is likely due to the roll-out manner used by Firefox of just activating a feature in existing installations, based on an installation-dependent schedule, instead of waiting for the next upgrade. Presumably, all the other installations had received upgrades before being hit by the roll-out. (This approach is both ethically dubious and poor software practice, because it removes control from the user, even to the point of risking his ability to continue working. What if something goes so wrong that a down-grade or re-install is needed—with no working browser installed? This is very bad for the private user; in a business setting, it could spell disaster.)

Today, I had to deactivate it in another installation: After opening and closing a greater number of tabs, Firefox grew more and more sluggish, often only displaying a page several seconds after I had entered the tab, or showing half a page and then waiting for possibly 5–10 seconds before displaying the rest. This for the third time in possibly a week since my latest upgrade. (I would speculate on some type of memory leak or other problem with poor resource cleanup.)
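For those who want the deactivation to be permanent and script-able: Firefox reads a user.js in the profile directory at start-up, overriding the stored preferences. A minimal sketch (Python), assuming the e10s preference names of this era (browser.tabs.remoteAutostart and friends; the exact names have varied between versions) and a hypothetical profile path that must be adjusted to the actual profile:

    from pathlib import Path

    # Hypothetical profile directory; substitute your own (cf. about:profiles
    # or ~/.mozilla/firefox/profiles.ini).
    profile = Path.home() / ".mozilla" / "firefox" / "example.default"

    # The e10s toggles, as I recall them; names have varied between versions.
    prefs = [
        'user_pref("browser.tabs.remoteAutostart", false);',
        'user_pref("browser.tabs.remoteAutostart.2", false);',
    ]

    # user.js is read at start-up and overrides the corresponding stored values.
    with open(profile / "user.js", "a", encoding="utf-8") as f:
        f.write("\n".join(prefs) + "\n")

(The same settings can, of course, be made manually through about:config.)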

I note that I have never really had a performance problem with Firefox (be it with pure Firefox or the Tor Browser*) before this supposed performance enhancer, possibly because I use few plug-ins and have various forms of active content (including Flash and JavaScript) deactivated by default—as anyone with common sense should. This makes the feature all the more dubious, because it has (for natural reasons) taken a very large bite out of the available developer resources—resources that could have been used for something more valuable, e.g. making it possible for plugins like “Classic Theme Restorer” to survive the upcoming XUL removal.

*Not counting the delays that are incurred through the use of Tor. I note that Tor is a component external to the Tor Browser, and that these delays are unrelated to the browser used.

Unfortunately, the supposedly helpful page “about:performance”, which was claimed to show information on tabs and what might be slowing them down, proved entirely useless: The only two tabs for which information was ever displayed were “about:config” and “about:performance” itself…

Oh, and apparently Electrolysis is another plugin killer: Plugin makers have to put in an otherwise unnecessary effort to make their plugins compatible, or the plugins will grow useless. Not everyone is keen on doing this, and I wish to recall (from my research around the time of the first round of problems) that some plugins face sufficiently large obstacles that they will be discontinued… (Even the whole XUL thing aside.)

Now, it might well be that Electrolysis will prove to have a net benefit in the long term; however, we are obviously not there yet, and the release(s) to a non-alpha-/beta-tester setting have clearly been premature.

Written by michaeleriksson

November 6, 2017 at 11:02 pm

The success of bad IT ideas

leave a comment »

I have long been troubled by the many bad ideas that are hyped and/or successful in the extended IT world. This includes things that simply do not make sense, things that are inferior to already existing alternatives, things that are good for one party but pushed as good for another, …

For instance, I just read an article about Apple and how it is pushing for a new type of biometric identification, “Face ID”—following its existing “Touch ID” and a number of other efforts by various companies. The only positive thing to say about biometric identification is that it is convenient. It is, however, not secure and relying* on it for anything that needs to be kept secure** is extremely foolish; pushing such technologies while misrepresenting the risks is utterly despicable. The main problem with biometric identification is this: Once cracked, the user is permanently screwed. If a password is cracked, he can simply change the password***; if his face is “cracked”****, he can have plastic surgery. Depending on the exact details, some form of hardware or software upgrade might provide a partial remedy, but this brings us to another problem:

*There is, however, nothing wrong with using biometric identification in addition to e.g. a password or a dongle: If someone has the right face and knows the password, he is granted access. No means of authorization is fool-proof, and combining several can reduce the risks. (Even a long, perfectly random password using a large alphabet could be child’s play if an attacker has the opportunity to install a hidden camera with a good view of the user’s keyboard.)

**Exactly what type of data and what abilities require what security will depend on the people involved and the details of the data. Business related data should almost always be kept secure, but some of it might e.g. be publicly available through other channels. Private photos are normally not a big deal, but what about those very private photos from the significant other? Or look at what Wikipedia says about Face ID: “It allows users to unlock Apple devices, make purchases in the various Apple digital media stores (the iTunes Store, the App Store, and the iBooks Store), and authenticate Apple Pay online or in apps.” The first might or might not be OK (depending on data present, etc.), the second is not, and the third even less so.

***Depending on what was protected and what abilities came with the password, this might be entirely enough, or there might be a need for some additional steps, e.g. a reinstall.

****Unlike with passwords, this is not necessarily a case of finding out some piece of hidden information. It can also amount to putting together non-secret pieces of information in such a manner that the biometric identification is fooled. For instance, a face scanner that uses only superficial facial features could be fooled by taking a few photos of the intended victim, using them to re-create the victim’s face on a three-dimensional mask, and then presenting this mask to the scanner. Since it is hard to keep a face secret, this scenario amounts to a race between scanner maker and cracker—which the cracker wins by merely having the lead at some point of the race, while the scanner maker must lead every step of the way.

False positives vs. false negatives: It is very hard to reduce false positives without increasing false negatives (see the sketch below). For instance, long ago, I read an article about how primitive finger-print* checkers were being extended to not just check the finger print per se but also to check for body temperature: A cold imprint of the finger would no longer work (removing a false positive), while a cut-off finger would soon grow useless. However, what happens when the actual owner of the finger comes in from a walk in the cold? Here there is a major risk of a false negative (i.e. an unjustified denial of access). Or what happens if a user of Face ID has a broken nose**? Has to wear bandages until facial burns heal? Is he supposed to wait until his face is back to normal again before he can access his data, devices, whatnot?

*These morons should watch more TV. If they had, they would have known how idiotic a mere print check is, and how easy it is for a knowledgeable opponent (say, the NSA) to bypass it. Do not expect whatever your laptop or smart-phone uses to be much more advanced than this. More sophisticated checks require more sophisticated technology, and usually come with an increase in one or all of cost, space, and weight.

**I am not familiar with the details of Face ID and I cannot guarantee that it will be thrown specifically by a broken nose. The general principle still holds.
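At bottom, such a scanner merely compares a similarity score against a threshold, which makes the trade-off between the two error types unavoidable: lower the threshold and attackers slip through; raise it and the legitimate user is locked out. A minimal sketch (Python), with scores invented purely for illustration:

    # Invented similarity scores, purely for illustration.
    genuine  = [0.95, 0.90, 0.85, 0.60, 0.55]  # real user (low scores: cold fingers, broken nose, ...)
    impostor = [0.20, 0.30, 0.45, 0.65, 0.70]  # attackers (high scores: masks, lifted prints, ...)

    for threshold in (0.5, 0.7, 0.9):
        false_negatives = sum(score < threshold for score in genuine)    # user rejected
        false_positives = sum(score >= threshold for score in impostor)  # attacker accepted
        print(f"threshold {threshold}: {false_positives} false positives, "
              f"{false_negatives} false negatives")

Running this shows the see-saw: the strict threshold eliminates the false positives at the price of three false negatives; the lax one does the reverse.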

Then there is the question of circumvention through abuse of the user: A hostile party (say, a robber or a law-enforcement agency) could just put the user’s thumb, eyeball, face, whatnot on the detector through use of force. With a password, the user might be cowed into surrendering it, but he has the option to refuse even a threat of death, should the data be sufficiently important (say, nuclear launch codes). In the case of law enforcement, I wish to recall, but could be wrong, that not giving out a password is protected by the Fifth Amendment in the U.S., while no such protection is afforded to fingerprints used for unlocking smart-phones.

Another example of a (mostly) idiotic technology is the “cloud”*/** service, in its various variations (as noted recently): This is good for the maker of the cloud service, who now has greater control of the users’ data and access, enjoys a considerable “lock in” effect, can forget about problems with client-side updates and out-of-date clients, … For the users? Not so much. (Although it can be acceptable for casual, private use—not enterprise/business use, however.) Consider, e.g., an Office-like cloud application intended to replace MS Office. Among the problems in a comparison, we have***:

*Here I speak of third-party clouds. If an enterprise sets up its own cloud structures and proceeds with sufficient care, including e.g. ensuring that own servers are used and that access is per Intranet/VPN (not Internet), we have a different situation.

**The word “cloud” itself is extremely problematic, usually poorly defined, inconsistently used, or even used as a slap-on endorsement to add “coolness” to an existing service. (Sometimes being all-inclusive of anything on the Internet to the point of making it meaningless: If I have a virtual server, I have a virtual server. Why would I blabber about cloud-this and cloud-that? If I access my bank account online, why should I want to speak of “cloud”?) Different takes might be possible based on what exact meaning is intended or what sub-aspect is discussed (e.g. SOA interactions between different non-interactive applications). While I will not attempt an ad hoc definition for this post, I consider the discussion compatible with the typical “buzz word” use, especially in a user-centric setting. (And I base the below on a very specific example.)

***With some reservations for the exact implementation and interface; I assume access/editing per browser below.

  1. There are new potential security holes, including the risk of a man-in-the-middle attack and of various security weaknesses in and around the cloud tool (be they technical, organizational, “social”, whatnot). The latter is critical, because the user is forced to trust the service provider and because the probability of an attack is far greater than for a locally installed piece of software.
  2. If any encryption is provided, it will be controlled by the service provider, thereby both limiting the user and giving the service provider opportunities for abuse. (Note e.g. that many web-based email services have admitted to or been caught at making grossly unethical evaluations of private emails.) If an extra layer of encryption can at all be provided by the user, this will involve more effort (see the sketch after this list). Obviously, with non-local data, the need for encryption is much higher than for local data.
  3. If the Internet is not accessible, neither is the data.
  4. If the service provider is gone (e.g. through service termination), so is the data.
  5. If the user wishes to switch provider/tool/whatnot, he is much worse off than with local data. In a worst-case scenario, there is neither a possibility to download the data in a suitable form, nor any stand-alone tool that can read them. In a best-case scenario, he is subjected to unnecessary efforts.
  6. What about back-ups? The service provider might or might not provide them, but this will be outside the control of the user. At best, he has a button somewhere with “Backup now!”, or the possibility to download data for an own back-up (but then, does he also have the ability to restore from that data?). Customizable backup means will not be available and if the service provider does something wrong, he is screwed.
  7. What about version control? Notably, if I have a Git/SVN/Perforce/… repository for everything else I do, I would like my documents there, not in some other tool by the service provider—if one is available at all.
  8. What about sharing data or collaborating? Either I will need yet another account (if the service provider supports this at all) for every team member or I will sloppily have to work with a common account.

To boot, web-based services usually come with restrictions on what browsers, browser versions, and browser settings are supported, forcing additional compromises on the users.
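Concerning the extra layer of encryption from point 2: In principle, the user can encrypt locally and let the provider store only ciphertext, at the cost of additional effort and of carrying the key-management burden himself (and, of course, server-side editing of the document is then no longer possible). A minimal sketch (Python), assuming the third-party “cryptography” package:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # must be kept local, secret, and backed up
    cipher = Fernet(key)

    plaintext = b"Draft contract, quarterly figures, whatnot."
    ciphertext = cipher.encrypt(plaintext)  # only this ever leaves the machine

    # After downloading again:
    assert cipher.decrypt(ciphertext) == plaintext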

Yet another example is Bitcoin: A Bitcoin has value simply because some irrational people feel that it should have value and are willing to accept it as tender. When that irrationality wears off, all Bitcoins become valueless. Ditto if Bitcoin is supplanted by another variation on the same theme that proves more popular.

In contrast, fiat money (e.g. the Euro or the modern USD) has value because the respective government enforces it: Merchants, e.g., are legally obliged, with only minor restrictions, to accept the local fiat money. At the outside, a merchant can disagree about how much e.g. a Euro should be worth in terms of whatever he is selling, and raise his prices—but if he does so by too much, a lack of customers will ruin him.

Similarly, older currencies that were on the gold (silver, whatnot) standard, or were actually made of a suitable metal, had a value independent of themselves and did not need an external enforcer or any type of convention. True, if everyone had suddenly agreed that gold was severely over-valued (compared to e.g. bread), the value of a gold-standard USD would have tanked correspondingly. However, gold* is something real, it has practical uses, and it has proved enduringly popular—we might disagree about how much gold is worth, but it indisputably is worth something. A Bitcoin is just proof that someone, somewhere has performed a calculation that has no other practical use than to create Bitcoins (see the sketch below)…

*Of course, it does not have to be gold. Barring practical reasons, it could equally be sand or bread. The money-issuing bank guarantees to give out, at any time, ten pounds of sand or a loaf of bread for a dollar—and we have the sand or bread standard. (The gold standard almost certainly arose due to the importance of gold coins and/or the wish to match non-gold coins/bills to the gold coins of old. The original use of gold as a physical material was simply due to its consistently high valuation for a comparably small quantity.)
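To make the emptiness of that calculation concrete: Bitcoin-style “proof of work” amounts to finding a nonce such that a hash value meets an otherwise arbitrary condition, a certificate of burnt CPU time and nothing more. A toy sketch (Python), simplifying the real scheme (double SHA-256 over block headers) to a leading-zeros check:

    import hashlib

    def proof_of_work(data: bytes, difficulty: int = 5) -> int:
        """Find a nonce such that sha256(data + nonce) starts with `difficulty` zero hex digits."""
        nonce = 0
        while True:
            digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
            if digest.startswith("0" * difficulty):
                return nonce
            nonce += 1

    # The only "product" is a number proving that CPU time was burnt:
    print(proof_of_work(b"block payload"))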

As an aside, the above are all ideas that are objectively bad, no matter how they are twisted and turned. This is not to be confused with some other things I like to complain about, e.g. the idiocy of various social media of the “Please look at my pictures from last night’s party!” or “Please pay attention to me while I do something we all do every day!” type. No matter how hard it is for me to understand why there is a market for such services, it is quite clear that the market is there and that it is not artificially created. Catering to that market is legitimate. In contrast, insofar as the above hypes have a market, it is mostly through people being duped.

(However, if we are more specific, I would e.g. condemn Facebook as an attempt to create a limiting and proprietary Internet-within-the-Internet, and as having an abusive agenda. A more independent build-your-own-website kit, possibly in combination with RSS or an external notification or aggregation service following a standardized protocol would be a much better way to satisfy the market from a user, societal, and technological point of view.)

Written by michaeleriksson

September 18, 2017 at 11:37 pm