Michael Eriksson's Blog

A Swede in Germany

Posts Tagged ‘software’

Subversion misbehaving with config files (and wicd)


I have repeatedly written about idiotic software behavior, including the presumptuous creation of files/directories without asking for permission (cf. at least [1] and [2]). In the wake of my adventures with Subversion ([3], [4]), I can point to yet another horrifyingly incompetent case:

Performing a backup,* I just saw several config files and directories for Subversion flash by—for a user account** that simply should not have had them.

*I use rsync in a sufficiently verbose mode that I can check on the progress from time to time.

**I use multiple accounts for different purposes. (As I notice during proof-reading, I sometimes use “user” to refer to a physical user, sometimes to a user account. Caveat lector.)

I went into the account to check, and they were indeed there: a total of six directories, a longish README file, and two long config files, for more than 20 kilobytes (!) of space. (And this, of course, is not the only account with these files and directories.)

Why were they there?

I had, at some point during my imports (cf. [3]), issued a single “svn --version” in a convenient window, which just happened to belong to this user. Now, adding even config files in such a blanket manner, even for a regular command, is unacceptable (but, regrettably, increasingly common). As I noted in a footnote to [1]:

For instance, creating a config file is only needed when the user actually changes something from the default config (and if he wants his config persistent, he should expect a file creation); before that it might be a convenience for the application, but nothing more.

Moreover, as I noted in [2]:

[A very similar misbehavior of “skeleton” files] simply does not make sense: Configuration settings should either be common, in which case they belong in global files, not in individual per-user copies; or they should be individual, and then the individual user should make the corresponding settings manually. In neither case does it make sense to copy files automatically.

(Additional motivation is present in [2]. Also see excursion.)

However, what does “svn --version” actually do? Its purpose is just to output the current version—nothing more, nothing less. There is no legitimate reason to access any other functionality. Even if we assume, strictly for the sake of argument, that the file creation had been acceptable for regular use, it would have been unacceptable here.

Then there is the question of what these files actually do. Well, a cursory look through the two config files finds nothing. I might have missed something, but any line that seems significant is commented out,* implying that the only possible purpose the config files could have is to make it easier for the user to later add his own configuration in the two files—which, descending into complete and utter idiocy, does not even reach the already too low justification for skeleton files. In a next step, without these config files, the directories and the README** file are pointless. Even if we ignore the preceding issues with the violation of the user’s right to control over his own files, the idiocy that is skeleton files, and the difference between “svn --version” and more active commands, the addition would, then, remain idiotic.

*There are some section headings, but they likely have no impact on their own.

**And the README is somewhat additionally absurd in light of Subversion, at least in my installation, not coming with proper man pages, instead relying on the less comfortable “svn help” (with variations like “svnadmin help” for related tools). Provide a suitable man page and put any non-trivial README contents there!

Amateur hour!

(How to do it better? Firstly, per [1] and [2], do not rely on such config files being present. Secondly, consider options like (a) asking whether it is acceptable to create them and (b) adding some possibility, e.g. a command-line switch, for the user to add them at such time as he sees fit.)
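A minimal sketch of that approach (the tool name, path, setting, and switch are all invented for illustration): read a config file if the user has created one, fall back to built-in defaults otherwise, and create the file only on an explicit request.

```shell
# Hypothetical config location for a hypothetical tool.
CONF="$HOME/.exampletool/config"

# Use the user's settings, if any; otherwise fall back to a built-in default.
[ -f "$CONF" ] && . "$CONF"
: "${verbosity:=1}"

# Creation only on explicit request, never as a side effect.
if [ "$1" = "--create-config" ]; then
  mkdir -p "$(dirname "$CONF")"
  printf 'verbosity=%s\n' "$verbosity" > "$CONF"
fi

echo "verbosity=$verbosity"
```

Nothing is written unless the user explicitly asks for it; a mere version query would leave the account untouched.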

At least, however, “svn --version” did not refuse to give out information when such directories and files could not be created. (I deliberately checked.) Unfortunately, there are other tools that are idiotic in this regard. For instance, I was long a user of “wicd” (a tool to connect a computer to a WIFI-router/-hotspot/-whatnot and, thereby, the Internet). At one point, I found myself on a computer with a root partition mounted read-only, tried to connect to the Internet to check for a solution, and was met with errors.* After some debugging, I found that wicd did something completely harebrained, namely to read in, normalize,** and write back one or several config files—and to treat a failure of the write as a fatal error, even when the config files were in order to begin with*** and nothing should have stood in the way of the connection. I can guarantee that nothing truly stood in the way of the connection, because I replaced**** the corresponding script with my own version, which did not perform the write, and everything worked well. (In the extended family, similar problems include websites that refuse access to even help and contact pages unless the user turns on one or more of JavaScript, cookies, and, in the past, Flash.)

*To my recollection, there was no actual error message, just a failure to connect, which is much worse than failing with an error message. However, I could misremember after these few years.

**I do not remember the details, but I was under the impression at the time that the potential benefit of this was next to nil in the first place. There was almost certainly an aspect of (deliberately or incidentally) destroying at least some user/admin changes to the config files, which would move the behavior from redundant or stupid to inexcusable—the will of the user/admin should always take precedence in questions like configuration.

***Indeed, as my debugging showed, wicd, in this case, tried to write back a file that was identical to what was already in the file system, as no other entity had changed it since the last normalization—or, likely, since dozens upon dozens of normalizations ago.

****In itself a horror, because of that read-only mount. I do not remember how I circumvented this, but it might have involved duplication onto a USB stick or mounting some RAM (e.g. with tmpfs) in the right place in the file system. (The latter is a very helpful trick, which I, during my Debian days, used to resolve one of the problems discussed in [5]: Forget about “chattr” and just mount a temporary file system over the likes of /usr/share/applications. The installer can now write to/pollute the directory, but as soon as the file system is unmounted, the pollution is gone.)
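The mount trick, as a sketch (the size is arbitrary and root privileges are required):

```
# Mount a temporary file system over the directory to be protected:
mount -t tmpfs -o size=16m tmpfs /usr/share/applications
# ...run the installer; whatever it writes lands on the tmpfs...
umount /usr/share/applications   # the pollution is gone with the mount
```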

Excursion on repository-wide configuration:
My first draft contained:

In the specific case of Subversion, additional doubts can be cast on the automatic presence of config files on the user level: A user might have multiple repositories, might have radically different preferences for these repositories, and adding config files in the repository directory when the repository was created would have been unobjectionable.

I removed this from the main text, as it is a little shortsighted and too focused on my own situation as the single user of multiple repositories. Equally, of course, a repository might have multiple users. Some configuration settings might be suitable for this, others might not. However, it is possible that some more refined version of the same idea and/or some variation of this on the workspace level would work. Looking at git (cf. parts of [1]),* chances are that it would work well, because any individual repository is single-user, and the collaborative aspect takes place through “push” and “pull” between repositories (while Subversion has a central repository with one or more workspaces per user).

*However, I have not investigated whether git misbehaves in the same manner as Subversion.

Likewise, a user might use different versions of Subversion. For historical reasons, I have two versions of Subversion installed; other systems could conceivably have more; and it is conceivable that someone might legitimately use different versions with the same user account. What if the respective config files are not compatible enough? This is, admittedly, a potential issue with many tools, but the presumption of automatic creation makes it the worse, e.g. because the user might not even know that the config files exist (unlike config files that he has added).

Excursion on skeleton files vs. Subversion’s misbehavior:
Here we see at least two other problems. Firstly, there is a non-trivial risk that skeleton files and application-created files (like Subversion’s) collide. Consider e.g. cases like version 1.x of a tool not having any such files and an administrator configuring skeleton files for the tool, while a later version 2.x does push its own files. Who should be given preference? Secondly, this is yet another case of evermore mechanisms being added to circumvent the will of the user, as even someone who has removed the skeleton nonsense will now be faced with the same problem from another direction, in a manner that circumvents the fact that the skeleton files are no more, and which he cannot trivially prevent. (Note, similarly, how killing sudo is only a partial help in keeping up security, as there still are e.g. polkit and dbus, both of which potentially introduce security holes, and definitely make security harder to understand, survey, and control—and do so for a similar purpose of convenience-at-the-cost-of-security.)

Excursion on [1] and Poppler:
In [1], I complain about Poppler screwing up xpdf. In light of later information, it appears that Debian developers screwed up xpdf using Poppler, which is a different story. (It is still a screw-up, but the blame is shifted and the problem is not automatically present on systems outside the Debian family.)


Written by michaeleriksson

March 25, 2023 at 11:59 pm

Time to abandon Wikipedia? / Another site destroyed by poor design


As I have noted in the past, redesigns of websites almost invariably turn out for the worse—it would have been better to stick with the earlier design. Indeed, there are some websites where the usability, readability, or whatnot was worsened to such a degree that I decided to abandon them, including FML ([1]), Etymonline, and the Daily Sceptic ([2]).* Also see [2] for a little more on the general issue.

*Links go to prior discussions. Linking to the site in question seems counterproductive.

Now, however, there might be a truly horrible problem—Wikipedia!

I first saw a weird misdesign some months ago in the French version, but, while being puzzled about the idiotic design, I did not dwell on the issue. (While I do use French Wikipedia, it is a far from everyday occurrence.)

Over the last few days, I have, again and again, seen a similar misdesign on English Wikipedia. Give or take, about half my visits have given me a page with the sensible old design—the rest, something absurd.

Apart from a different look-and-feel of the main page contents, which might or might not be an acquirable taste,* there are at least three overlapping** issues, with the suspicion that I would find more on a deeper investigation:

*It is very important to keep the difference between the misdesigned and the merely new, unaccustomed, different, whatnot in mind. That said, I am not an immediate fan of the new look-and-feel.

**Overlapping to the degree that I could have drawn the borders between the items differently or divided them into a different number of issues.

  1. The extensive old left-hand menu has been removed. Some of the entries appear to have no new correspondent, while the listing of language-versions has been moved to some type of separate element.* This has the considerable disadvantage, in general, that it is impossible to get a good overview of the available language-versions at a glance** and that it requires more steps to find the links to other language-versions. From a more personal point of view, I note that the other languages are much harder to get at without a mouse*** than before and that one common personal use-case now has so much additional overhead that it is not worth the bother: when I look something up in English, I often check the corresponding Swedish and German names merely by searching for “sve” and “deu”, respectively, and seeing what link is displayed (ditto, m.m., when I look something up in Swedish or German).

    *And a highly misdesigned one at that: It looks like a button but behaves like a select element and/or an improvised menu, thereby violating one of the fundamental rules of design—element behavior should be consistent with looks. (Unfortunately, an increasingly common problem.)

    **A common use-case for less proficient English speakers is to open the English page for an unknown word and then to navigate to a native or otherwise better known language in order to read both pages in parallel or otherwise rely less on the English one. (While this does not apply to me personally, I do use the same approach with e.g. the aforementioned French.) Note the risk of building frustration when, for page after page, there is not just an increase in effort—but also a considerable risk that effort is put in in vain, as the lack of the right language-version only becomes detectable after effort has already been put in.

    ***I have increasingly abandoned mouse use, do not usually have one attached to my computer, and would, were it not for the many tools that are built under the assumption of a mouse, recommend others to follow my example. Not using a mouse is easier on the fingers and with the right tools faster and more comfortable.

  2. The left-hand side is now occupied with what appears to be the table of contents, which has no place there, is rarely helpful at all and/or is rarely helpful except for a first overview or first navigation (implying that a constant display is pointless), and which takes so much more space horizontally that the main text is both reduced in width and artificially shifted sideways. This is highly sub-optimal on even a 16:9 display—and could be a major problem with narrower dimensions. (A smart-phone used to show the same design, e.g., might have considerably more table of contents than main text on the screen, if held upright. The user would then be forced to turn the smart-phone sideways—a decision that should be his, not Wikipedia’s.)

    A complication that I have not investigated is what happens when the table of contents grows unusually wide, but the result is bound to be either an incomplete display of the table of contents (making the pointless display even more so) or an even further reduction-in-width and/or shift of the main contents.

  3. The implementation appears to use some variation of “position: fixed” or “position: sticky”. Both are illegitimate, should never have been invented, and should never or only in very, very rare exceptions be used by a professional web-designer. Also see [1], especially for a discussion of “position: fixed” with regard to top menus.

What to do now? I have not made up my own mind yet, but in light of the deteriorating quality of and increasing Leftist agenda pushing in the contents of Wikipedia (cf. e.g. [3]; things have grown even worse since then), it might well be time to abandon the English version. The German and Swedish versions still (knock-on-wood) have an older interface and are not as bad in terms of Leftist distortions. For English contents, a source like infogalactic.com might be useful: this is a fork of an older version of Wikipedia, it still has the old interface, and it has to some (but insufficient) degree been edited to counter existing Leftist distortions. On the downside, it is sometimes out-of-date and receives less new content. (Other replacement candidates exist, but I have not yet had the time to investigate them.)

For those wishing to remain with Wikipedia, some experiments with “skins” might help, but these require the user to be logged in, which is idiotic for reading (as opposed to editing), as it allows Wikipedia to track any and all readings on a personal basis. It might also be counterproductive for Tor users. A URL parameter “useskin” is available, but will only affect the page immediately called—it is not propagated when links are opened, which makes it borderline useless. In both cases, the user is still ultimately dependent on what customization Wikipedia allows, which, going by general software/website trends, is likely to be less and less over time. The mobile* version is slightly better than the regular/desktop version, but not by much.

*Replace “en.wikipedia” with “en.m.wikipedia”.
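For illustration, the “useskin” parameter is appended like any other query parameter; to my understanding, “vector” is the internal name of the old default skin:

```
https://en.wikipedia.org/wiki/Example?useskin=vector
```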

There might or might not be a solution available over userCSS (or whatever the local browser equivalent is called); however, I have not investigated this, the amount of work could be out of proportion to the benefit, and even so trivial a change as a renamed element could cause a solution to fail again. Moreover, there is no guarantee that any given browser will support it.

Equally, there might or might not be a solution over some type of external reader program. This, too, I have not yet investigated.

(Of course, any workaround for the design issues will still leave the content problems. Cf. [3] again.)

Note on date and state:
The time of writing is January 20, 2023, and the text reflects the state at the time of writing. The future is likely to bring changes.

Written by michaeleriksson

January 20, 2023 at 4:11 pm

Some issues with spellcheckers


There are many complications around and problems with spellcheckers* and similar tools. Some of them will be discussed below.

*See excursions for some notes on spellcheckers vs. dictionaries, on traditional vs. spellchecker dictionaries, and on the spellchecker that I use.

  1. Spellcheckers provide an indirect control of language, which can not only lead to certain trends winning out in an arbitrary manner, e.g. because a particularly important spellchecker picked one spelling or whatnot over another, but also create a risk of deliberate manipulation, e.g. in that the Politically Correct can attack certain words by having them removed from the dictionary of a spellchecker. Ditto, m.m., by keeping them, but marking them as “offensive”, “racist”, or similar (provided that the spellchecker has a corresponding functionality). The analogous threat exists, and is likely greater, through grammar and style checkers; however, my experiences with such tools are extremely limited.
  2. Spellchecking technology is usually strongly based on the English language, leading to implicit assumptions of how a spellchecker “should” work that do not necessarily hold for other languages. For instance, in German, I often have problems with compounds: Take “Kirschtörtchen”* (“cherry tart”), a compound based on “Kirsche”** (“cherry”), “Torte” (“cake”), and the diminutive suffix “chen”. While each of the components is individually recognized, the whole is not. Why? The spellchecker has no true compounding mechanism, as it relies on the English*** rule of compounding with spaces or hyphens—and “cherry tart” (as well as “cherry-tart”) is indeed implicitly recognized, as neither “cherry” nor “tart” raises objections. In German, however, “Kirschtörtchen” is either in the dictionary or it is not, and it is either accepted or rejected accordingly.

    *The example is deliberately chosen to simultaneously be immediately understandable in the respective language and rare in actual use. The latter with the idea that the German version should (and does) fail with my spellchecker, and stand a considerable chance of failing in other spellcheckers.

    **There is also a separate word “Kirsch”, which refers to an alcoholic beverage based on cherries—the switch from “Kirsche” to “Kirsch” is not the issue. Neither is the switch from “Torte” to “Tört” respectively “Törtchen”, as “Törtchen” (“tart”) is also recognized as a separate word.

    ***With a proportionately small number of exceptions, e.g. “stonewall”.

    While this leads to many false negatives* in German, in that words that actually “exist”** are rejected, it can be argued that English is prone to false positives, e.g. in that a “fox tart” will pass the test, regardless of whether something like a fox tart actually exists. In contrast, the German “Fuchstörtchen” would be rejected, as any prior use has been so extraordinarily rare that even a very exhaustive dictionary would be unlikely to include it.

    *From the point of view that a “correct” word passes the test and an “incorrect” one fails it. For the reverse perspective, the reverse terminology applies.

    **Philosophically, there might be some room to debate what compounds/words, in some sense, exist, e.g. whether a reasonable neologism or new compound exists or whether only words with a prior history do. That is off topic for today, however.

    (Would these problems go away, if a spellchecker was written with a greater awareness of, say, German? No, because the various rules and exceptions can be quite tricky. Chances are that things would be considerably better, however.)

  3. Even English spellcheckers can be surprisingly stupid, however. For instance, I often find myself using plurals that are perfectly legitimate but happen to be rare—and see my spellchecker complain, because these plurals have not been explicitly added to the dictionary. Here the solution would be trivial: anyone who constructs a dictionary should simply, as a matter of course and excepting only words where no plural exists,* add the correct plural form together with the singular form. This especially as there are exceptions to the simple add-an-s rule, which might trip up the user.

    *Even these cases can be tricky, however, as many words that might superficially seem to lack a plural, or a valid use-case for a plural, do have one. (And I suspect that such mistakes have been behind many or most of these missing plurals.) Consider “people”: superficially, it might seem that there is no point to a plural, e.g. by reasoning that “we the people” are everyone or that “people” is a (misguided and highly irregular) plural in its own right (of “person”). In reality, there are plenty of peoples, including the English people, the French people, the Spanish people, … and speaking of “peoples” in such a context is perfectly acceptable—as earlier in this sentence or in a phrase like “the peoples of Europe”. (Note that the latter differs in meaning from “the people of Europe”.)

  4. Many spellcheckers and/or dictionaries seem to miss half the point of correctness, namely that own use should be consistent. For instance, by default my spellchecker allows both “labor” and “labour”, both “color” and “colour”, etc., which is contrary to a reasonable expectation (and particularly problematic for a non-native speaker, who might be very inconsistent in his use and/or uncertain of the rules). The same applies to “-ize” vs. “-ise”, or maybe even “-z-” vs. “-s-”,* which is a mess to begin with and which is a minefield for those who are not native speakers. Certainly, the default should be to pick one of British English and U.S. English (or e.g. Australian English) and stick to that with consistency.** Moreover, internally acceptable alternate spellings should be clearly marked as such—not just blindly accepted.

    *Currently, both “organise” and “organize” (with variations, e.g. “organisation”/“organization”) are accepted.

    **Yes, a good spellchecker should have support for different language versions—and mine does. However, this should be reflected in an explicit choice, e.g. a user setting or command-line option that specifies the preferred version. Absent such a preference, a sensible and consistent default version should be picked—not a union of all conceivable versions.

    To make matters worse, the choices are in part idiotic. For instance, by default, “millennia” is rejected while “millenniums” is accepted—the exact opposite of a reasonable expectation.

  5. A given text might be exposed to more than one spellchecker and/or dictionary, and when these do not agree there can be issues. Consider e.g. that several co-workers collaborate on a text, possibly leading to conflicting “corrections” (this especially if a tool with some type of autocorrect is used, implying that the co-workers need not even realize that they have an issue). Or consider a single author revisiting a text that he wrote ten years earlier, or a single author upgrading his software and being caught by an “improved” dictionary, or dictionaries being switched behind his back (as noted in an excursion).

    While I am not yet aware of a case, a scenario seems plausible in which some counterpart tries to be “helpful” by interfering with an already completed text. For instance, if someone writes a text offline (which is the wise thing to do) and then tries to post it online, some meddling browser or website might “helpfully” refuse to accept it because “unresolved spelling issues” are present, forcing the user to make changes either to the text or to the local dictionary (if at all possible), although the text was already finalized with the blessing of another spellchecker—and one presumably configured to match the author’s preferences better than the interfering one. (In a similar vein, note previous texts on interference by W-rdpr-ss when users post by email.)

  6. All in all, the best alternative might be to start completely or almost completely from scratch, with a dictionary that is either empty or only contains the most common, basic, and undisputed words (e.g. the likes of “a”, “an”, “for”, “to”, “me”, … in English), and then to add a single verified spelling for every word upon its first use and upon its first use only. However, this implies a lot of additional work, might lead to problems when migrating from one tool to the next, and might not be supported by all spellcheckers.
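Returning to the compounding issue in point 2, the difference can be sketched as follows (the word list and the check are invented for illustration; real spellcheckers are far more involved): an English-style checker tests the parts of a space-separated compound one by one, while a German closed compound is a single token that must itself be listed.

```shell
# A toy "dictionary": one recognized word per line.
printf '%s\n' cherry tart Kirsche Torte Törtchen > /tmp/dict.txt

# Accept a token iff it appears, as a whole line, in the dictionary.
check() {
  if grep -qx "$1" /tmp/dict.txt; then
    echo "ok: $1"
  else
    echo "rejected: $1"
  fi
}

for w in cherry tart; do check "$w"; done   # both parts pass individually
check "Kirschtörtchen"                      # the closed compound fails
```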

Excursion on “my” spellchecker:
I currently use Vimspell, the spellchecker internal to the editor Vim. I switched from using aspell* to Vimspell a few years ago, while using the Debian distribution of Linux. This worked excellently and I have no recollection of seeing e.g. the “color” vs. “colour” issue (cf. above) during these days. Sometime around last New Year’s, I switched from Debian to Gentoo. Gentoo, apparently, has another setup, with different pre-installed Vim dictionaries,**/*** and I have grown aware of dictionary-related issues by and by since then. (This awareness was delayed by my, relatively speaking, limited writing for the first half or more of the year.) I also have the strong suspicion that my apparent “preferred spelling” of some few words has inadvertently changed, because I am sufficiently uncertain of the spelling as to rely on the spellchecker for corrections, notably relating to “-ise” vs. “-ize”. We might, e.g., have a situation where I used to spontaneously write a word with “-ise”, saw it marked as incorrect with the “old” dictionary, and changed it to “-ize”, but today spontaneously write it with “-ise”, see no correction, and let the “-ise” stand.

*An external spellchecker usually applied after a text is otherwise finished. Vimspell, in contrast, works on the more common paradigm of showing errors during the regular writing/editing.

**I leave unstated whether Vim, Debian, or Gentoo is to blame.

***Writing this, I have a very vague recollection of having to manually install something or other, but I could misremember. I will certainly overhaul my dictionaries and setup in detail before resuming any writing of fiction, where consistency both between and, above all, within works is much more important than in the blogosphere.
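For reference, a minimal Vimspell setup of the kind discussed might look like the following (a sketch; the language version and the path must be adjusted to the own setup):

```
" Turn spellchecking on and pick ONE language version explicitly:
set spell
set spelllang=en_gb
" Words accepted via zg land in this file:
set spellfile=~/.vim/spell/en.utf-8.add
" In normal mode: ]s / [s jump to the next/previous flagged word,
" zg marks the word under the cursor as good, zw as wrong.
```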

Excursion on spellcheckers vs. dictionaries:
Typically, almost by necessity, there is a division into components. In terms of what words are recognized or not recognized, in a typical implementation, the most important is the dictionary or dictionaries used, and it is very often the case that the same spellchecker would give a better result simply by replacing or amending the dictionary. (Which is typically exactly what happens when a user marks a word as correct to override the claims of the spellchecker.) However, there might also be rules that are built in or configured separately, and different spellcheckers can differ in how they approach dictionaries.* Correspondingly, it can be hard to draw the line between the guilty and the innocent. I have gone with “dictionary” when this would usually clearly be correct, but otherwise tended towards “spellchecker”.

*We could, for instance, have a rule in English that provides an “s-plural” to all nouns, unless an exception is configured, e.g. in that “kid” gives us “kids”, because no exception is given, but “child” gives us “children” because this exception has been explicitly added. (Whether some variation of this rule is commonly used, I do not know. The example is only intended to illustrate the possibility of rule–dictionary interaction.) Also note the above discussion of compounds and the lack of a rule mechanism.
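The rule-plus-exception interplay can be sketched as follows (the exception list is invented for illustration): default to the s-plural unless an explicit exception has been configured.

```shell
# Pluralize a noun: configured exceptions first, the general rule otherwise.
plural() {
  case "$1" in
    child)  echo "children" ;;   # explicitly configured exception
    person) echo "people"   ;;   # ditto
    *)      echo "${1}s"    ;;   # the general s-plural rule
  esac
}

plural kid     # prints "kids" via the general rule
plural child   # prints "children" via the exception
```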

Excursion on “traditional” dictionaries and spellchecker dictionaries:
A “dictionary” as intended for a spellchecker, or some other type of comparatively simple automatic processing,* is very different in typical contents from a traditional intended-for-humans dictionary. In many cases, it is barely more than a list of words,** and the use of “dictionary” should be viewed with some caution. A very, very significant difference, and one to bear in mind when considering the influence of a dictionary, is that a traditional dictionary might be based on years or decades of work by specialists, while a spellchecker dictionary will typically only reach that bar if*** the contents are taken from a traditional dictionary—and even then there is an issue of with what competence, with what effort, and with what honesty the transfer is made. Now, if you wanted to manipulate language use, would you rather exert pressure on a group of specialists to change a recommendation that has been present in ten editions of their dictionary or on some software CEO, who can simply dictate that the next version of his dictionary will contain a certain change?

*A pass-phrase generator, for instance.

**If not necessarily stored as a list of words.

***And for reasons of copyright, this might be a very big “if”, especially if we look at the FOSS world.

Written by michaeleriksson

December 11, 2022 at 5:57 pm

Yet another day of everything going wrong


As I have noted repeatedly in the past, there are days when it seems like everything that could go wrong does go wrong. Today is one of those days.

In addition to some of what is mentioned in earlier texts from today,* I note that I have a recurring problem with USB and my Internet connection: I use the Internet connection of my smartphone, attached per USB, as the source of Internet for my computer. Every once in a while, the computer begins to misbehave, going through a phase of dropping and re-adding the corresponding USB device, and, as a consequence, dropping and restoring the Internet connection.

*My earlier prediction that there would be little writing during early December has proved incorrect…

Now, firstly, this dropping and re-adding is spurious, as there normally really is no problem or only a problem of such fleeting character that the device is re-added virtually immediately. However, during such a phase, the computer* stubbornly drops and re-adds the fully functioning and well-connected device.

*I have had similar issues with other devices in the past, which points to the computer, not the device, as the source of the trouble. This includes external hard drives (although, interestingly, not since my switch from Debian to Gentoo), which then might need a complete unmount and remount to work, despite there being nothing wrong.

Sometimes things do not work out, and the device is not properly re-added, while the log files complain about an allegedly poor USB cable. Here, when renewed attempts really would make sense, say, by trying again in five minutes, no attempts follow. Once there is one single failure, the cycle of dropping and re-adding is (usually) ended until I physically unplug the USB cable and re-insert it. (After which things work perfectly for days or hours, or the cycle of spurious drops and re-adds begins again, or the complaint about the USB cable reappears.)

In other words: if at first you do succeed, try and try again, until you fail; if you fail, never try again… Or: if it’s not broken, break it; if it’s broken, don’t fix it.
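The flapping is at least easy to quantify from the kernel log; a minimal sketch, assuming the log messages contain “USB disconnect” (as they do on my systems—the exact message format can vary between kernel versions):

```shell
# Count "USB disconnect" events in kernel-log text fed on stdin,
# e.g.:  dmesg | count_usb_drops
# A high count over a short time window indicates the spurious
# drop-and-re-add cycle described above.
count_usb_drops() {
  grep -c 'USB disconnect'
}
```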

The last time around, I bought a new-but-cheap* USB cable, paying great attention that it was marked as suitable for data transfer, and got close to six months of problem-free use. Last week, I noticed the same spurious cycle again, and bought another new cable. However, until today, when the problems grew out of hand, I did not get around to replacing the cable. And when I did replace the cable, expecting another handful of months of problem-free use—the problems continued! If anything, they grew worse…

*3 Euro, if I recall correctly.

As if this were not enough, Alpine, my email client, misbehaves when the Internet connection goes missing: When attempting to send without an Internet connection, a sane client should give an error message along the lines of “Sending failed. Please check your Internet connection.”. What does Alpine do? It simply (and silently!) switches to an external editor* to edit the email—an action that is not only pointless but will leave the typical user highly confused. (I have the advantage of having seen this idiocy in the past.) Moreover, a second attempt to send cannot be made without exiting the editor, losing time unnecessarily—especially when the second, third, fourth, … attempt also fails.

*I have Alpine configured to use Vim as an external editor. I do not know how Alpine would behave if no external editor was configured.

In the past, before I switched to a more mail-drop-y configuration, it was even worse, as the client would just freeze for minutes at a time if an open connection to an external mailbox failed due to the Internet connection dropping—and it never recovered, even when the Internet connection returned.* After these few minutes, I would briefly have the opportunity to break the connection—and missing this brief window led to another few minutes of a frozen client, before the opportunity re-appeared. Note that there was no way, short of killing the entire client, to break the connection outside these windows; that there (to my knowledge) was no way to reconnect the client without a restart, should the break succeed; and, again, that there was no automatic reconnect once the Internet connection returned. Amateurs!

*Note that different protocols are used for sending emails and interacting with mailboxes. The two activities are surprisingly independent.

(I have written about idiocies in Alpine in the past. A problem might be that it originated as non-FOSS software, and still carries issues from those days that no-one has bothered to fix, because they appear too rarely.)

Then there have been repeated issues with my spell checker (unrelated to the Internet), but I will save those for another day.

Written by michaeleriksson

December 8, 2022 at 2:22 pm

Similarities between the world of software and the world of government

leave a comment »

An early impetus for my texts on choice was a number of software annoyances and how I noted quite a few disturbing parallels between the world of software and the world of government (likely with a number of analogies in other areas too).

The first that I wrote on the topic was a list of key phrases to be elaborated upon, but then I ended up writing those other texts instead, and the key phrases and a preliminary introduction spent some two months in an open editor with no further work.

The original idea remains interesting, however, and has an overlap with an idiotic annoyance from earlier today.* As a compromise, I will give a minimally expanded version of the original set of key phrases, with the key phrase pre-colon and a new explanation post-colon. (The originally intended treatment would have been more in-depth and would likely have included a number of additional items that never made it even to the key-phrase stage.)

*The government is being abusive and the software makers have implemented functionality to let it be abusive. (As I conclude from some Internet searches in the interim, there is deliberately no way to block “presidential alerts” using standard software, which is an inexcusable act of user hostility from the software makers. I should have full control over my own devices—not the government, Google, or some other user-/citizen-hostile entity.)

  1. Bigger is better: In the eyes of the software industry, bigger software is better than slim and efficient software; big government, in the eyes of governments, is better than small government. (In both cases, the opposite, by any reasonable standard, holds true.)
  2. Less control: Similarly, the opinions are that the less control the user/citizen has, the better.
  3. Decision detached from the users/citizens—a central pseudo-elite knows better: Self-explanatory and strongly overlapping with the previous item.
  4. Control of data/money: Both the software industry (and especially various Internet services) and the government are very keen on controlling the users’/citizens’ data and, if they can, money. In many ways, users/citizens are nothing more than a source of data and money, with no rights and interests of their own.
  5. Too much focus on surface: Software (and websites, etc.) is developed too much to look good, be presentable, be marketing friendly, etc.; too little to actually work well. Government is the same.
  6. Low usability: Here I am a little uncertain what I intended. Software is, of course, plagued by poor usability, but the analogy with governments, apart from government software, is not entirely obvious. Chances are that I referred, metaphorically, to excessive bureaucracy, the often undue amount of work put onto the citizen, and similar. This point is very valid, but is not an ideal match for the original key phrase.
  7. Gets worse over time: Both software and governments tend to grow worse over time. (Also see e.g. [1] and [2].)
  8. Forced use of certain shitty tools: Both with software and with government (and, especially, government software), users are forced to use inferior tools. (Also see [1] and [2], again.)
  9. Laws on duties of users/citizens instead of business/government: Laws in a Rechtsstaat should be there to protect the citizens from (a) the government, (b) other citizens. In reality, they more often aim to disable the citizen in favor of the government. Terms and conditions, and similar documents, have the same flaw—they should regulate what duties a business has in return for the customer’s payment; instead, they remove such duties and detail duties that the paying customer has towards the business. (Also see e.g. parts of [3].)

Written by michaeleriksson

December 8, 2022 at 1:02 pm

Browsers and lack of choice

with one comment

Related to the families of texts on choice (note [1] and follow-ups) resp. computer annoyances of various kinds, there is an interesting (and extremely depressing) drift towards forced use of inferior browsers.

This drift results from a two-pronged attack*: on the one hand, declining browser quality; on the other, the need to remain both with up-to-date browsers and within a limited range of preferred-by-websites browsers.

*Without implications of a deliberate action.

First, declining browsers: Firefox is a splendid example, which has over the last ten-or-so years grown incrementally worse, dropped features that once made it great, added lesser features, worsened the interface, whatnot. (Cf. [2] and what by now must be more than half a dozen other texts.) In particular, Firefox has gone ever more in the direction of eliminating user choice and forcing users to live with the preferences of the makers—while it once was quite good in this area.* Chromium** and, by implication, Chrome are horrors in usability and interface—so absurd that I feel like snapping after even a five-minute experiment. Other browsers that I have tried have fallen into similar traps, use too-old standards, are not available on Linux,*** or are otherwise unsuitable for generic purposes.****

*But by no means perfect. Age-old problems include a poor handling of config files, the idiocy of about:config, and the lack of a good key-mapping mechanism—something many other tools had mastered in the 1980s.

**An (at least approximately) FOSS version of Chrome, which should be almost equivalent in functionality, but with less privacy intrusions and other problems.

***Use another OS? That would worsen the problem discussed in this text, as I am no longer just forced to use certain browsers but also certain OSes.

****This includes e.g. W3m, a text-based browser that runs well in a text terminal and can handle many websites excellently, but which falls flat on its face with sites heavy on graphics, JavaScript, DHTML, and whatnot. (Also cf. the second prong.)

Secondly, the need to remain up-to-date (etc.):

HTML and related languages and technologies are nominally well-defined, and any standards-conformant graphical browser should display any web page correctly, including that any and all “active contents” and control elements work as intended. Nevertheless, this is not the case, as various websites* use non-standard features or deliberately and artificially show error messages for “too old” browsers or browsers outside a very limited selection (e.g. Chrome**/Firefox/Edge)—even when they actually would have worked without this artificial error message. Notably, these non-standard features are almost invariably pointless, either because the same thing can be done with standard features, or because the purposes achieved bring no value to the user. For instance, my first steps with online banking might have taken place some twenty years ago—and it worked well with the technology of twenty years ago. Today’s online banking has no true value added, in some ways works worse, and still requires very new versions of these few browsers…

*Immediate personal problems for me stem from the websites for the “German IRS”-tool Elster, my online banking, and W-rdpr-ss, which have all forced me to perform unwanted updates. Elster is particularly perfidious as the German government dictates the use of this tool for tasks like filing taxes—a certain tool use is ensured by the force of law.

**Usually, without mention of Chromium, despite Chromium being the lesser evil for a sane user.
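How artificial such gating is can be seen from a caricature of the logic that these websites effectively implement; the browser names and version patterns below are invented for illustration, but the principle—reject everything outside a short, hard-coded User-Agent list, regardless of actual capability—is exactly what one encounters:

```shell
# Caricature of browser gating: content that would render fine in any
# standards-conformant browser is hidden behind a hard-coded check on
# a handful of User-Agent substrings.
gate_browser() {
  case "$1" in
    *Chrome/10[0-9]*|*Firefox/10[0-9]*|*Edg/10[0-9]*) echo allowed ;;
    *) echo "error: please upgrade to a supported browser" ;;
  esac
}
```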

The result? Poorly programmed websites force users to constantly upgrade browsers (and limit them to that small selection), while the sinking quality of browsers makes every upgrade painful. Browser-wise, the world is worse off than ten years ago. Ditto, in terms of websites.

To some degree, this problem can be lessened by having several browser installations and using an older version, or a browser outside the selected few, for the more sensible websites. However, the set of websites that work well is continually shrinking, and chances are that the solution is temporary. A particular risk is that the “selected few” are eventually reduced to a single browser (likely, Chrome), bringing us back to the millennium hell of “Optimized for [browser A] in resolution [X times Y]—and don’t you dare visit with anything else!”.

Excursion on security:
But is it not better to use the newest versions for security reasons? Dubious, considering the track record of browser makers and how low security is prioritized. Chances are that the last version of an “extended support release” from five years ago will be more secure than the fresh-off-the-press version from yesterday.* More importantly, browser security issues stem largely from various active contents, notably JavaScript, and browsing with JavaScript off should be the default for any sensible user.** However, in as far as the answer is “yes”, this creates yet another problem—the user now has the choice between using a less secure browser and a worse browser.

*Indeed, it used to be a recommendation among experienced users to not install the latest version of anything until some sufficient bug-stability had been reached: leave the 4.0.0 to the beginners and wait for 4.0.5! However, with the mixture of automatic and forced updates, perverted version schemes, and (often) lack of true major and minor versions, this has grown near impossible—everyone is an alpha tester.

**Which, again, grows harder and harder as evermore incompetent websites use JavaScript to implement functionality that either brings no additional value or could be done as well without JavaScript. Indeed, I strongly suspect that many of them use such features as a mere excuse to force an enabling of JavaScript in order to abuse it to the disadvantage of the users, e.g. by unethical profiling.

Excursion on more general problems:
Unfortunately, issues like software growing worse over time are quite common, and unfortunately have long spread to the world of Unix-like systems too, including through software that is stuck on the desktop paradigm, software that no longer includes sensible command-line arguments, software that is written specifically for e.g. KDE or Gnome, software that relies (often for no practically worthwhile purpose) on D-Bus, and, above all, software written on the premise that the user is an idiot who should be prevented from doing what he wants with the software.

Forced use of certain software and OSes is by no means unheard of either. For instance, it is still common that a business has to own MS-Office licenses, because it receives, or is forced to send, MS-Office documents from/to other parties. For instance, there is a non-web version of the aforementioned Elster, but it runs* only on MS Windows, which would force any user to have a license for that, have a computer running it, etc.

*Or, at least, ran, some years ago. I have not checked for changes, but I am understandably not optimistic.

Again, the world of software was, by and large, better ten years ago than it is today. And, again, there were things that the likes of Vim did right in the 1980s that virtually all newer software fails at in 2022—including something as basic as easily configurable key-mappings.

Written by michaeleriksson

November 28, 2022 at 2:48 pm

Overruled choice and WordPress (“p”, not “P”!)

leave a comment »

Today, I spent a few hours writing a long and complicated text ([1]). Before final polishing, I wanted to refresh myself with some music—and immediately ran into a case of overruled choice, also see [2]. I wrote a text on that, took a short break, did my polishing, published, took a coffee break—and then ran into yet another case of “overruled choice”:

It appears that WordPress (which, note again, I write with “p”—any “P” is the illicit manipulation of this user-hostile service) has manipulated at least some signs in the extended hyphen family. Specifically, it appears to (inconsistently) turn “-” into the HTML character reference for the n-dash, respectively the Unicode character 8211. This, however, is not what I asked for. If I want to add an n-dash, I* am perfectly capable of doing so—indeed, I have a sign for that in my private markup, used to generate the original and correct HTML (which WordPress (“p”!!!!!) later butchers). I entered a regular “-” (likely Unicode 002D; definitely the corresponding ASCII decimal 45) and I expect a regular “-” to appear. The replacement with an n-dash is particularly ill-advised, as different dashes have different lengths and semantic implications, and this replacement made no sense in context. (In contrast, cf. below, a replacement with a minus sign, Unicode 2212, might have made sense, even if it remained an illicit manipulation.)

*And the default assumption should be the same for every other user. I use post-by-email, which implies the sending of a pre-formatted HTML document. Users making such documents from markup languages (like I) or by hand can be assumed to know what they are doing. Those who use an HTML editor have access to the editor’s capabilities to add various signs and whatnots as they see fit—and likely with greater capabilities and definitely with a higher degree of precision than through these illicit manipulations. In fact, there is some chance that the latter run into the complication that the HTML editor has some type of autocorrect going in one direction, which WordPress (“p”!!!!!) then tries to “correct” in another direction…

This manipulation is the worse for occurring in a typographically tricky situation, namely in algebraic expressions containing variables with names involving the “-” sign, e.g. “o- – m+”.* I had some doubts** as to how that would work, but it looked sufficiently OK in my local browser, using the generated correct HTML code. However, how can I trust my local impressions, when WordPress (“p”!!!!!) illicitly changes my express wishes? Or what if I publish the same HTML elsewhere, and some other interfering bunch of presumptuous incompetents decide to change this in some other manner, leading to inconsistent documents? Etc.

*Specifically, the “-” hanging on the “o” was left unchanged, while the “-” in between “o-” and “m+” was altered.

**These doubts were the reason that I explicitly checked how the rendering in WordPress turned out, as there might be differences depending on e.g. the font used—even absent illicit manipulations.

Of course, the question must be raised how many other manipulations WordPress (“p”!!!!!) performs that I am not yet aware of. (And the list is fairly long already, including the constant manipulation of “Wordpress”, mishandling of various quotation marks, spurious removal and insertion of empty lines, …) In this case, I noticed because I checked this specific rendering, in a specific place, for a specific reason—but this is not something that I usually do, and certainly not for entire documents. (No, I have not checked the entirety of [1] either, just that one area.) For instance, a current or future replacement of “fuck” (the f-word) with e.g. “f-word” (an “f” hyphenated to “word”) is definitely possible. I cannot even rule out, although I consider it extremely unlikely, that a spurious “Vote Biden!” or “Hitler is a hero!” has been inserted somewhere.
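Short of re-reading every published text, the best one can do is spot-check mechanically; a sketch of such a check, assuming the published HTML has been saved locally (the filename in the comment is hypothetical):

```shell
# Report lines that contain an n-dash, either as the HTML character
# reference (&#8211;) or as the literal UTF-8 character (U+2013), so
# that a document sent without n-dashes can be checked for silent
# insertions after publication.
find_ndashes() {
  grep -n -e '&#8211;' -e '–' "$1"
}

# Example:
# find_ndashes published.html   # hypothetical filename
```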

Note on references:
I have drawn on Wikipedia’s List of XML and HTML character entity references, as well as my local “man page” for the ASCII encoding, for various codes.

Disclaimer:
As this text discusses mistreatment of text by WordPress (“p”!!!!!) and is published through WordPress (“p”!!!!!), it is quite possible that what I try to express fails through exactly the type of illicit manipulation that I try to argue against.

Excursion on my markup language:
This markup language is by no means perfect, as I have not bothered to do everything doable, including that I have not added a way to encode the minus sign represented by Unicode 2212 or implemented a more generic math mode. So far, there has been little need, but I might have done so today, if my typographic fears had been realized. (Or I might have chosen to simply replace the “-” with the character reference for Unicode 2212—that I did not implies that WordPress (“p”!!!!!) should not have done so either, and it certainly should not have replaced it with the pointless and misleading n-dash.) Similarly, I have not yet added an “escape mode”, to prevent some piece of markup code from being interpreted as markup code and instead be inserted as the literal expression in the generated HTML file. This is OK—it is my decision, in a weighing of my time vs. the (small) benefit of the addition. That WordPress (“p”!!!!!) interferes is not OK, even if it is with some misguided notion that users are idiots whose texts must be arbitrarily reformatted, even at the risk that proficient users see their work sabotaged. (See [2] for more and a link to even further discussions.) Now, an addition that I will perform shortly is to find some workaround for the manipulation of specifically “Wordpress”, maybe by inserting an invisible character or a thin space somewhere to trick the replacement algorithm.
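As for that workaround, the idea can be sketched as follows: insert the character reference for a word joiner (U+2060, decimal 8288) between “Word” and “press”, which should render invisibly while (hopefully) no longer matching a naive search-and-replace. Whether this actually defeats WordPress’s algorithm is untested:

```shell
# Insert the HTML character reference for WORD JOINER (U+2060) into
# every occurrence of "Wordpress", so that the string no longer
# matches a literal search-and-replace while rendering identically.
# (The & in the replacement is escaped, as a bare & means "the whole
# match" to sed.)
protect_wordpress() {
  sed 's/Wordpress/Word\&#8288;press/g'
}
```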

Written by michaeleriksson

November 10, 2022 at 10:55 pm

Overruled choice and firejail

with one comment

Another case of user-hostile limitations and overruled choice ([1]):

I regularly use firejail, a sandbox tool, to reduce the risk of security breaches and programs (or downloaded contents viewed in programs) misbehaving.* Today, I was trying to listen to a downloaded music file, the filename of which contained a comma. I started it with my usual bash-script wrapping a firejailed mplayer—and was met with an “Error: “[filename]” is an invalid filename: rejected character: “,””.** Note how this makes no mention of what program or sub-functionality of that program caused/detected/whatnot the error, nor gives any true and valid reason—as the filename (cf. below) is perfectly valid.

*Note that, apart from bugs, there are many software makers who have radically different ideas as to what they are allowed to and should do than many users and/or specifically I. Consider e.g. “phone home” functionality.

**Where I have replaced the actual filename with “[filename]”. I caution that the original quotes might have been distorted by WordPress—as might “Wordpress”, which I write with a “p” not a “P”. Cf. [1].

After some experiments with the ls* command, I could conclude that (a) my file system has no objections to the filename, (b) ls has no objections to the filename, (c) even firejail, itself, has no objections to calling ls with the filename,** but that (d) firejail objects to using this filename in a “whitelist”***. Firstly, this is an extremely disputable decision, as there are hardly ever legitimate reasons to artificially restrict users (and, apart from the above, I cannot recall ever having problems with commata in filenames in any other context). Secondly, the error message is absolutely and utterly inexcusable—an acceptable error message might have read “firejail: “[filename]” is not a valid name for whitelisting: rejected character: “,””. I note, in particular, that (a) it is never acceptable to assume that the user calls a certain program directly and will automatically know what program is to blame, (b) firejail, by its nature, is always used in combination with some other program and correspondingly must make clear whether a certain error comes (directly or indirectly) from firejail or from the other program, (c) as filenames can enter firejail by different roads, it must make clear what road is affected.

*This command just lists file information, so a potentially hostile music file cannot really do any damage even absent firejail.

**But this might simply be a result of firejail being agnostic of the nature of the arguments sent to the “real” program, in this case ls resp. mplayer.

***A means to tell firejail what files may be accessed. An opposite blacklist mechanism tells firejail what files may not be accessed. Note that with mplayer, which per default underlies stricter rules, whitelisting is necessary, while it is optional with ls. Indeed, this points to a possible further point of criticism: it is not a given that a tool will fail just because a certain file cannot be whitelisted, and it would, then, be better for firejail to merely print a warning message and continue with execution. Why should a “firejail ls [filename]” work, while a “firejail --whitelist=[filename] ls [filename]” leads to an error message? Similarly, why should “firejail --whitelist=[filename] ls [completely different filename]” lead to an error message?
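Pending a fix, the restriction can be worked around by giving firejail a comma-free alias of the file; a sketch, with the actual firejail invocation left as a comment, as the details depend on the local profiles:

```shell
# Work around firejail's rejection of commas in whitelist paths by
# symlinking the file under a sanitized name in a scratch directory;
# the comma-free symlink can then be whitelisted and played instead.
play_sanitized() {
  local file="$1" dir link
  dir=$(mktemp -d) || return 1
  link="$dir/$(basename "$file" | tr ',' '_')"
  ln -s "$(readlink -f "$file")" "$link"
  printf '%s\n' "$link"   # the comma-free path to hand to firejail
  # firejail --whitelist="$dir" mplayer "$link"   # hypothetical call
}
```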

Generally, firejail has proven quite problematic in terms of arbitrary (or arbitrary seeming, cf. excursion) restrictions, poor usability, and whatnot. For instance, a natural use case is to whitelist the home directory of the current user (while preventing a number of other accesses, including to the Internet, to public directories, and maybe a sub-directory or specific files in the home directory). But this results in “Error: invalid whitelist path /home/[username]”. This is an intolerable restriction, as it should be up to the user and the user only what decisions he allows here. Specifically, the result with my script to play music (and another to view movies) is that I must put my files in a sub-directory of the home directory, which is a nuisance.* Moreover, the error message is, again, very weak. (Yes, this time the issue of whitelisting is mentioned, but neither that firejail is the culprit, nor the exact issue, viz. why this was not allowed.)

*To this, note (a) that I reject the idiocy of pseudo-standard directories like “Movies” and “Music” in general and in principle, as a bad idea, (b) that these would be entirely redundant in my case, as I use separate users for this division (just like I have separate users for e.g. surfing, writing, and business activities). Indeed, when whitelisting the home directory, a restriction that prevents the automatic creation of such directories by user-hostile tools would be an obvious use case.

For instance, I had once shuffled off some user files to a directory /d2,* and later tried to access one of the files using firejail. The result? “Error: invalid whitelist path”. As research showed, there is only a limited set of directories below the root that firejail allows, and the (created by me) directory /d2 was not in this limited set. Very similar criticisms as with home directories apply. To my recollection, making matters worse, this set of directories was hard-coded, where it should have been configurable, even be it on the system level (instead of the user level).

*I have my “root” and “home” directory trees on separate partitions and the latter happened to be full. This shuffle was a very temporary workaround.

Another issue is the mixture of whitelisting and blacklisting that is used, which is both inconsistent and can lead to odd effects. It would be better to, for most tools, simply consider everything blacklisted and then to whitelist exactly what the tool legitimately needs. (The aforementioned ls is an exception, where the reverse approach seems natural, at least with regard to files, but not, of course, rights like network access.) In all fairness, there is room to discuss the degree to which the firejail developers are to blame and to what degree the distributions that contain firejail.*

*Which have a considerable influence through delivery of configuration files. For instance, until a little less than a year ago, I was a Debian user, and Debian had a very lax attitude, which made use more comfortable, but also reduced the benefit of using firejail. Gentoo, my current distribution, takes a much more stringent attitude. While I prefer this stringency in the long term, it did make the switch from Debian to Gentoo unnecessarily painful. As to Gentoo, there is a lot of incompetence too, in that all user changes are supposed to go in “.local” files that are included during reading of the main “.profile” file for the program at hand. (For instance, there is a file mplayer.profile, which is read, and a file mplayer.local, which is merely included—but mplayer.profile does things that I cannot, or not trivially, undo through mplayer.local, which is contrary to my right to configure my system as I see fit.)
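For reference, such a “.local” file might look as follows; this is a sketch only—which directives can still be usefully overridden at this point depends on the main profile and the firejail version at hand:

```
# ~/.config/firejail/mplayer.local -- included from mplayer.profile
# Allow a non-standard media directory instead of the pseudo-standard ones:
whitelist ${HOME}/media
# Keep the network off for a pure media player:
net none
```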

Excursion on possible justifications:
At least with /d2 (cf. above), I could imagine a justifying scenario relating to firejail being a SUID* program, that there might be some way for a user to gain illicit rights if whitelisting directories directly under the root directory (i.e. /). This is speculation, however, and it should have been stated much more clearly, if this was actually the case. It does not justify the home directory issue, even if true, and I cannot see how it would justify the comma issue—if in doubt, firejail should perform better sanitation, not throw errors.

*A program that starts with other rights than the user who calls it and then (if programmed correctly!) drops any additional rights as soon as possible.

Written by michaeleriksson

November 10, 2022 at 7:42 pm

The tax filings / Follow-up: Depressing software issues and the yearly tax filings

leave a comment »

A few days ago, I wrote of depressing software issues ([1]) preceding the yearly tax filings. Now I have completed the actual tax filings.

For obvious reasons (minimal activity in 2021), it was the least effort that I have had in many years, taking roughly twenty minutes for the actual filling-out-the-forms, including checking and re-checking, and maybe another five minutes for finding the few numbers and papers needed. (But not counting the software complications described in [1] and the need to buy batteries for my mouse.)

However, that filling-out-the-forms could easily have been done in a quarter of the time, had Elster worked better, including having a more thought-through workflow and a more sensible set of fields carried over when importing data from last year’s forms. A particular problem was with the EÜR*: a number of fields were marked as empty-but-mandatory, forcing me to enter redundant 0s in some Euro-fields. One of the fields (net profit?) was mandatory and not automatically calculated (or manually editable) until I had redundantly added a dummy 0 in a non-mandatory field, which took a while to figure out. The empty-but-mandatory fields also included four fields for the Steuernummer**, which occurred twice—once filled in, once empty.*** There was no import of the second occurrence from the other, nor from last year, and I was forced to look up the values externally (as Elster does not allow having multiple pages open in parallel). Worse, one of the fields, to identify my local IRS by name, was likely redundant, as the equivalent information is hard-wired into the first of three numerical components of the Steuernummer proper. As the (overall) numerical component was artificially split into three parts, a single copy-and-paste was impossible, increasing the work and the risk for errors even further.

*A statement of various revenues and costs that allows a simplified calculation of taxable profits for small businesses.

**An entity identifier used for tax purposes. I have written about it and several related issues, including the artificial tri-partition of the field(s), in the past, but I lack the energy to search for links at the moment. (I have many previous texts on Elster and the IRS, and to find the right text or texts could take a while.)

***Presumably, two different values can occur for the same EÜR, for one reason or another. In my case, they have always been the same, and, at a minimum, the value from last time around should have been respected for the version idiotically kept empty.
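How little the artificial tri-partition buys is easy to see: splitting a single pasted value into the three components is a one-liner, so the form could trivially have accepted one field. A sketch, with an invented value (the exact format varies, so the field widths are purely illustrative):

```shell
# Split a Steuernummer of the (illustrative) form "AAA/BBB/CCCCC"
# into its three components -- exactly what the form could do itself,
# if it accepted a single pasted value instead of three fields.
split_steuernummer() {
  local a b c
  IFS=/ read -r a b c <<EOF
$1
EOF
  printf '%s %s %s\n' "$a" "$b" "$c"
}
```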

By the time that I was done with the EÜR, I felt the pressure of irritation reach the border of anger, as I have been plagued by so many other problems with the incompetence and tax-payer/user hostility of Elster and the IRS over the years, and I seriously contemplated leaving the rest for tomorrow. However, I pushed on and, for once, the remainder went almost without problems: I had merely to enter a few trivial values and check that no spurious fields from 2020 were left filled.

However, these trivial values included, in the main document, values from two secondary documents (the EÜR and the VAT declaration) that should have been imported automatically to avoid the additional work and the additional risk of errors, as well as numbers from my health insurance, which the IRS will, as a matter of course, ignore in favor of the exact same values delivered directly from the insurer to the IRS. Idiotic. (And something which remains idiotic—I have complained about these issues before. Indeed, most or all of the old problems seem to remain, but they had less effect this year, due to my much easier situation, filing-wise, and my knowledge of what to expect—like when one has developed the knack of opening or closing that tricky hatch, gate, whatnot, which is so troublesome for the first-time user. Consider e.g. the severely misleading labels for various actions in Elster—knowing what they actually do, as opposed to what they claim to do, makes life easier.)

Written by michaeleriksson

October 28, 2022 at 2:22 pm

Posted in Uncategorized


Some thoughts on software and animation


During my recent Firefox adventures ([1]), I stumbled upon an old (2009) page ill-advisedly suggesting a project to add animations to Firefox.

Considering the horror that animation is, it is disturbing with what a seemingly sane and insightful attitude the project set out:

Animation in the browser is a tool, but not a goal unto itself. Wherever animation is used, it should be with a purpose and benefit to the user.

Like many web technologies, animation is a useful but easily abused tool. The early web and the dawn of the .gif format saw animation heinously overused on websites, with blinking, spinning, and scrolling animations thrown in because they “looked cool.” As the web stopped foaming at the mouth and began the transition from what could be done to what should be done, animation became used more successfully as a tool. Some ways in which animation can be useful include:

o Making browsing feel faster
o Adding visual affordances to make tasks more understandable
o Making the browser and tasks more visually appealing

To bring animation to Firefox, we decided to first focus on three key areas that we felt would give users the most benefit by adding animation. Out of many possibilities, we looked for places where animation would make interactions feel faster and help users perform tasks.

The first paragraph is dead on—but exactly this is where seemingly every modern software and every modern website that uses animations fails. This includes Firefox. If the developers had actually lived this paragraph, Firefox would have had no or only minimal animations today. (I cannot quite shake the suspicion that this was an alibi-paragraph, so that the developers could establish “common sense on animation” credibility before going on to actually display a lack of common sense.)

The second is half right, as it describes a horror of old, but it then mistakenly assumes that things would be significantly better in the page’s now (i.e. 2009). This was not the case: Animation then, just as before, and just as in my now (i.e. 2022), was/is excessive and usually done more for the purpose of having animations than anything else.

(The list is discussed below.)

The final paragraph points to three areas where animations in Firefox would be an alleged good. As can be deduced from the rest of the page, these are “Tab tearoff”, “Text search on page”, “Movement of toolbar items within rows (UI elements, bookmarks, tabs)”. None of these, however, have added value as implemented. On the contrary, they are among exactly the type of things not to animate, because the result is annoying and distracting, often delays the action, and adds no value.

Looking at the list:

o Making browsing feel faster

In the case of e.g. a progress indicator, an hour glass, or similar, this might work to the degree that the user sees that the browser (more generally, application) is still working. Other than that, I have never seen a positive effect. On the contrary, I have often seen cases where the application has been made objectively slower by the introduction of animations, because continued work is not possible until the animation is ended. One example is the CTRL-F issue discussed in [1]. Perhaps the paramount example, and one of my own first major contacts with animation, was the slow-as-molasses menus of Windows 2000 and/or Windows XP.* This was at a time when gaining a usable command line in Windows was virtually impossible and programs had to be started by clicking through multi-level menus. I often “knew the way” and could have reached my goal with a reasonably rapid click-click-click-click. Instead, I had to click on the main menu, wait for an animated menu to slowly unfold, click on the right sub-menu, wait for an animated menu to slowly unfold, click on the next sub-menu, wait for an animated menu to slowly unfold, and then click on the finally visible program.

*This was long ago and I am vague on the details. I do remember that I soon found some type of setting to disable this shit—but this anti-feature should have been off per default or entirely non-existent to begin with. (As I repeatedly noted in those old days of heavy Windows use: if Windows has a togglable feature, the default value will be poorly chosen in two-thirds of the cases. This while a coin-toss would have erred in just half the cases.)

o Adding visual affordances to make tasks more understandable

(An “affordance” is “Any interactive control or component serving as a cue to the user to take some action.” according to Wiktionary. I have no recollection of hearing the term before yesterday.)

There might be some limited room for this, but not much, certainly none that applies to what I have seen in Firefox or what was suggested in the final paragraph of the initial quote, and I can think of few situations where non-animated hints would not be better, if in doubt due to the annoyance factor. Take e.g. a field to input a mandatory text combined with a save button. In a non-animated case, the button might be greyed out as long as the field is empty, and the field carry a note like “mandatory field”. In an animated case, we might end up with an animated paper-clip bouncing around the screen, with a speech-bubble “You must enter text in this field!!!”. (Or, in a less extreme example, there might be a big flashing arrow pointing to the text field.)
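The non-animated variant above needs no animation machinery at all; a minimal sketch follows, where the names (`canSave`, the wiring in the comment) are invented for illustration and not taken from any real toolkit:

```typescript
// Illustrative sketch of the non-animated affordance: the save button is
// enabled exactly when the mandatory field contains non-whitespace text,
// and a static "mandatory field" note does the explaining.
function canSave(mandatoryField: string): boolean {
  return mandatoryField.trim().length > 0;
}

// In a real UI, this predicate would drive the button state on every
// input event, e.g.:
//   saveButton.disabled = !canSave(textField.value);
//   noteLabel.textContent = canSave(textField.value) ? "" : "mandatory field";
```

The point of the sketch is that the cue is static: the state of the button changes, but nothing moves, flashes, or bounces.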

However, I suspect that a faulty application of this idea explains the CTRL-F issue: Some nitwit assumed that, without the animation, too many users would be permanently confused as to what to do after pressing CTRL-F, while the animation would provide them the insight that “Hey! There is a search field!”. In reality, this would apply only to a small minority of highly unskilled computer users,* who additionally are too unobservant to spot the fact that a search field has just appeared (as opposed to being slowly blended in through an animation), and would, even for them, likely only be relevant the first few times. Correspondingly, the benefit is minimal. The delay and the annoyance, on the other hand, hit everyone for the duration. Even if an animation were beneficial, this is a poor way to do it. A better way would be to just show the field and have the already present field flash briefly. The annoyance from the animation, per se, remains, but work can begin at once and the annoyance from the delay through the animation is gone.

*Effectively, someone who has minimal experience with virtually any computer application, including other browsers, and simultaneously has minimal experience with Windows/Mac without being a sufficiently proficient user to have moved to a more adult OS, like a typical Linux distribution. Of course, someone like that might be unlikely to try CTRL-F to begin with…

For a highly proficient user, however, any animation in this case is likely to be harmful as he is likely to (otherwise) just type in the search phrase immediately after CTRL-F (resp. / or ? in Vim, resp. whatever keys the application at hand requires), without looking for a search field. Even without a delay, the animation can be problematic, as it screams “Look at me!” and might cause an artificial interruption as the user does exactly that. With a delay, depending on exactly how the delay is implemented, it might well be that the user is now typing in vain, as the keystrokes are not registered by the search…
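The “show immediately, flash briefly” alternative can be sketched as follows; `SearchField` and `openSearch` are hypothetical names for illustration, not any actual Firefox API:

```typescript
// Hypothetical model of a search field: visibility, keyboard focus, and a
// short-lived attention flash are independent properties.
interface SearchField {
  visible: boolean;
  focused: boolean;
  flashing: boolean;
}

// Open the field with no entrance animation: it appears and takes focus at
// once, so keystrokes typed immediately after CTRL-F are registered. The
// flash is a brief attention cue only; a timer (not modeled here) would
// clear it after, say, 200 ms without ever blocking input.
function openSearch(field: SearchField): SearchField {
  return { ...field, visible: true, focused: true, flashing: true };
}
```

The design point is that focus is granted before (and independently of) the attention cue, so the proficient user who types blindly loses no keystrokes, while the unobservant user still gets his flash.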

o Making the browser and tasks more visually appealing

I have no recollection of seeing this done successfully, anywhere, at any time, in any product with “everyday animations”.* On the contrary, this comes very close to using animations as “a goal unto itself”. When it has worked, it has been with more spectacular “major effect animations”,** as with the classic bouncing-card animation after solving a solitaire in Windows. However, even these grow old fast, and they are certainly not to be recommended for frequent use in an everyday tool like a browser.

*Here I find myself lacking in terminology, but consider e.g. the CTRL-F animation or a tab-movement animation.

**Again, I am lacking in terminology, but the example given is likely to be explanation enough.

For my part, I used to work with the following informal rules (in the days that I had to implement occasional GUI-functionality):

  1. Only add animations when they bring a tangible benefit.
  2. If you believe that an animation will bring a tangible benefit, you are wrong in nine cases out of ten.

    (Where “benefit” is to be seen over the entire user base—not just the first one or two uses of a newbie. Note in particular the potential damage through delays and annoyances, as mentioned above.)

  3. If in doubt, do not animate.

These rules served me so well that I cannot recall ever adding an animation (although I probably have—if in doubt because some product manager or whatnot was more naive and insisted).

If giving rules for someone else (which I implicitly am), I might add e.g.:

  1. The main effect of animation, whether intended or unintended, is to call attention to something, with possible side-effects including interrupted work-flows, interrupted thoughts, attention diverted from where it really belongs, etc. Therefore, be very certain that you actually do want to call attention to whatever is animated.

    Corollary 1: Never have more than one animation in the same page at the same time.

    Corollary 2: Keep animations short. Once the purpose of getting attention can reasonably be assumed to have been reached, the animation must be stopped so that work can continue without distraction.

  2. Beware the annoyance factor, especially during prolonged use. Remember that there might be some who use your product for hours every day.

    (See the earlier discussion for more detail.)

  3. Keep the different proficiencies of different users in mind, and that the more proficient are more likely to be intense users and/or that intense users are more likely to become proficient. Do not tailor your application to your grand-mother. (Unless, of course, the intended target demographic is old ladies.)

    More generally, a good application might well make allowances for weak[er] users, but not in a manner that hinders strong[er] users. For instance, looking back at [1], making it trivial to connect the TorBrowser to Tor is good, but making it hard to by-pass Tor is bad. For instance, reasoning that “we do not need any keyboard short-cuts, because everything can be done by mouse” is hopelessly narrow-minded. For instance, to return to Firefox/TorBrowser, providing many ready-made keyboard short-cuts is good; making them near impossible to change is bad. An attitude of “A user should not need expert knowledge to use our application.” is laudable; an attitude of “If a user does have expert knowledge, we must prevent him from using it.” is despicable.

  4. Any and all animations, without exception, must have an easy-to-find* switch to turn them off. In most cases, the default value should be “off”.**

    *The obscure, well-hidden, poorly documented, and often functionless settings in Firefox’s about:config are a negative example.

    **A problem with this rule is that many naive decision makers will reason that “The users would LOVE the animations, if they knew about them! If the animations are off, they will never find out; ergo, animations must be on!”. The premise of “LOVE”, however, is very dubious. As a compromise, an application might provide a “first use” dialogue where a few meta-decisions can be made, e.g. whether to have all animations “on” or “off” until a more fine-grained decision is made. (Similar meta-decisions might include whether to allow “phone home” functionality, whether (on Linux) to prefer Vi- or Emacs-style key-bindings, and similar.)

  5. Clippy is the devil.
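The off-by-default switch and the “first use” compromise above amount to a simple precedence order when deciding whether a given animation runs. The sketch below uses invented names and assumes a per-animation user override, a global first-run meta-decision, and a built-in default of “off”:

```typescript
// Invented settings model: a per-animation user override beats the global
// first-run choice, which in turn beats the built-in default of "off".
type Choice = "on" | "off";

function animationsEnabled(
  perAnimationOverride: Choice | undefined,
  firstRunChoice: Choice | undefined,
): Choice {
  return perAnimationOverride ?? firstRunChoice ?? "off";
}
```

With this precedence, a user who never sees the first-run dialogue, or skips it, gets no animations; a naive decision maker’s “the users would LOVE it” premise is at least made an explicit, revocable choice rather than a hard-coded one.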

Clarification of terminology:
Note that I do not consider any and all change of a display to be an animation. For instance, if a menu immediately goes from a folded to an unfolded stage and then remains static until the next user action, this is a change in the display, but it is not an animation. Ditto a search window that immediately appears and then remains static. Ditto a link which immediately changes looks when focused or unfocused and then remains static. Ditto e.g. a mouse cursor that moves from point A to point B as the result of a continuous user action. In contrast, the Windows folders discussed above suffered from an animation. Ditto CTRL-F in [1]. An hour glass that turns for two minutes while the program is working is also an example of animation, but one much more legitimate.

Written by michaeleriksson

October 26, 2022 at 1:07 pm

Posted in Uncategorized
