Michael Eriksson's Blog

A Swede in Germany

Posts Tagged ‘software’

Follow-up: On Firefox and its decline


Since my post on the decline of Firefox, the developers have released another “great” feature, supposedly solving the speed problem relative to Chrome and other competitors: Electrolysis* (a.k.a. e10s).

*I have no idea how they came up with this misleading name. Possibly, they picked a word at random in a dictionary?

This feature moves the browser to a multi-process architecture, detaching the GUI from the back-end of the browser and thereby, on paper, making the browser faster and/or hiding any lags that do occur from the user.
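As a sketch of the underlying idea (a toy illustration only, not Firefox code: the real Electrolysis separates the browser into distinct OS processes, which additionally gives crash isolation, but the latency-hiding principle is similar):

```python
# Toy sketch: keep the "UI" side responsive by pushing page work onto
# another worker. Here a thread stands in for what Firefox implements
# as a separate content process.
import queue
import threading

def render_page(url, results):
    # Stand-in for the content back-end: pretend to lay out a page.
    results.put(f"rendered:{url}")

def load_in_background(url):
    results = queue.Queue()
    worker = threading.Thread(target=render_page, args=(url, results))
    worker.start()
    # The UI loop could keep handling input events here instead of blocking.
    result = results.get()
    worker.join()
    return result

print(load_in_background("example.org"))  # rendered:example.org
```

The promise is that the GUI never waits on the renderer; the complaint below is about what happens when the worker side fails.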

In reality? In one browser installation* (shortly after the feature was activated), I had to disable this feature, because it caused random and unpredictable tab failures several times a day, forcing me to “restart” (I believe that was the chosen word) the tab in order to view it again. Even the tabs that did not need to be restarted displayed again only with a lag every time another tab had failed. The net effect was not only to make the browser more error prone, but also to make it slower (on average).

*I have several Firefox (more specifically, Tor Browser) installations for different user accounts and with different user settings, including e.g. separate installations for business purposes, private surfing, and my WordPress account. This is to reduce both the risk of a security breach and the effects of a breach, should one still occur. As for why the other installations were not affected, this is likely due to the roll-out manner used by Firefox of simply activating a feature in existing installations, based on an installation-dependent schedule, instead of waiting for the next upgrade. Presumably, all the other installations had received upgrades before being hit by the roll-out. (This approach is both ethically dubious and poor software practice, because it removes control from the user, even to the point of risking his ability to continue working. What if something goes so wrong that a down-grade or re-install is needed—with no working browser installed? This is very bad for the private user; in a business setting, it could spell disaster.)

Today, I had to deactivate it in another installation: After opening and closing a greater number of tabs, Firefox grew more and more sluggish, often only displaying a page several seconds after I had entered the tab, or showing half a page and then waiting for possibly 5–10 seconds before displaying the rest. This for the third time in possibly a week after my latest upgrade. (I would speculate on some type of memory leak or other problem with poor resource clean up.)

I note that I have never really had a performance problem with Firefox (be it with pure Firefox or the Tor Browser*) before this supposed performance enhancer, possibly because I use few plug-ins and have various forms of active content (including Flash and JavaScript) deactivated per default—as anyone with common sense should. This makes the feature all the more dubious, because it has (for natural reasons) taken a very large bite out of the available developer resources—resources that could have been used for something more valuable, e.g. making it possible for plugins like “Classic Theme Restorer” to survive the upcoming XUL removal.

*Not counting the delays that are incurred through the use of Tor. I note that Tor is a component external to the Tor Browser, and that these delays are unrelated to the browser used.

Unfortunately, the supposedly helpful page “about:performance”, which was claimed to show information on tabs and what might be slowing them down, proved entirely useless: The only two tabs for which information was ever displayed were “about:config” and “about:performance” itself…

Oh, and apparently Electrolysis is another plugin killer: The plugin makers have to put in an otherwise unnecessary effort in order to make their plugins compatible, or the plugins will grow useless. Not everyone is keen on doing this, and I wish to recall (from my research around the time of the first round of problems) that some plugins face sufficiently large obstacles that they will be discontinued… (Even the whole XUL thing aside.)

Now, it might well be that Electrolysis will prove a net benefit in the long term; however, we are obviously not there yet, and the release(s) to a non-alpha-/beta-tester audience were clearly premature.


Written by michaeleriksson

November 6, 2017 at 11:02 pm

The success of bad IT ideas


I have long been troubled by the many bad ideas that are hyped and/or successful in the extended IT world. This includes things that simply do not make sense, things that are inferior to already existing alternatives, and things that are good for one party but pushed as good for another, …

For instance, I just read an article about Apple and how it is pushing for a new type of biometric identification, “Face ID”—following its existing “Touch ID” and a number of other efforts by various companies. The only positive thing to say about biometric identification is that it is convenient. It is, however, not secure and relying* on it for anything that needs to be kept secure** is extremely foolish; pushing such technologies while misrepresenting the risks is utterly despicable. The main problem with biometric identification is this: Once cracked, the user is permanently screwed. If a password is cracked, he can simply change the password***; if his face is “cracked”****, he can have plastic surgery. Depending on the exact details, some form of hardware or software upgrade might provide a partial remedy, but this brings us to another problem:

*There is, however, nothing wrong with using biometric identification in addition to e.g. a password or a dongle: If someone has the right face and knows the password, he is granted access. No means of authorization is foolproof, and combining several can reduce the risks. (Even a long, perfectly random password using a large alphabet could be child’s play to capture if an attacker has the opportunity to install a hidden camera with a good view of the user’s keyboard.)

**Exactly what type of data and what abilities require what security will depend on the people involved and the details of the data. Business related data should almost always be kept secure, but some of it might e.g. be publicly available through other channels. Private photos are normally not a big deal, but what about those very private photos from the significant other? Or look at what Wikipedia says about Face ID: “It allows users to unlock Apple devices, make purchases in the various Apple digital media stores (the iTunes Store, the App Store, and the iBooks Store), and authenticate Apple Pay online or in apps.” The first might or might not be OK (depending on data present, etc.), the second is not, and the third even less so.

***Depending on what was protected and what abilities came with the password, this might be enough entirely or there might be need for some additional steps, e.g. a reinstall.

****Unlike with passwords, this is not necessarily a case of finding out some piece of hidden information. It can also amount to putting together non-secret pieces of information in such a manner that the biometric identification is fooled. For instance, a face scanner that uses only superficial facial features could be fooled by taking a few photos of the intended victim, using them to re-create the victim’s face as a three-dimensional mask, and then presenting this mask to the scanner. Since it is hard to keep a face secret, this scenario amounts to a race between scanner maker and cracker—which the cracker wins by merely having the lead at some point in the race, while the scanner maker must lead every step of the way.

False positives vs. false negatives. It is very hard to reduce false positives without increasing false negatives. For instance, long ago, I read an article about how primitive finger-print* checkers were being extended to not just check the finger print per se but also to check for body temperature: A cold imprint of the finger would no longer work (removed false positive), while a cut-off finger would soon grow useless. However, what happens when the actual owner of the finger comes in from a walk in the cold? Here there is a major risk of a false negative (i.e. an unjustified denial of access). Or what happens if a user of Face ID has a broken nose**? Has to wear bandages until facial burns heal? Is he supposed to wait until his face is back to normal again, before he can access his data, devices, whatnot?

*These morons should watch more TV. If they did, they would know how idiotic a mere print check is, and how easy it is for a knowledgeable opponent (say, the NSA) to by-pass it. Do not expect whatever your lap-top or smart-phone uses to be much more advanced than this. More sophisticated checks require more sophisticated technology, and usually come with an increase in one or all of cost, space, and weight.

**I am not familiar with the details of Face ID and I cannot guarantee that it will be thrown specifically by a broken nose. The general principle still holds.
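The trade-off above can be made concrete with a toy threshold model (all scores and thresholds below are invented for illustration; real biometric matchers are far more complex, but the tension is the same):

```python
# Hypothetical similarity scores produced by a matcher (higher = more
# similar to the enrolled face). Numbers invented for illustration.
genuine = [0.92, 0.88, 0.75, 0.60]   # 0.60: the owner with a broken nose
impostors = [0.55, 0.40, 0.82]       # 0.82: a well-made mask

def error_rates(threshold):
    # False negative: a genuine user scoring below the threshold.
    false_neg = sum(s < threshold for s in genuine) / len(genuine)
    # False positive: an impostor scoring at or above the threshold.
    false_pos = sum(s >= threshold for s in impostors) / len(impostors)
    return false_neg, false_pos

# A lax threshold accepts everyone genuine, but also the mask;
# a strict threshold rejects the mask, but also the broken nose.
print(error_rates(0.58))
print(error_rates(0.85))
```

Moving the threshold only slides errors from one column to the other; making both small at once requires a fundamentally better matcher, not tuning.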

Then there is the question of circumvention through abuse of the user: A hostile party (say, a robber or a law-enforcement agency) could simply put the user’s thumb, eye ball, face, whatnot on the detector through use of force. With a password, he might be cowed into surrendering it, but he has the option to refuse even a threat of death, should the data be sufficiently important (say, nuclear launch codes). In the case of law enforcement, I wish to recall, but could be wrong, that not giving out a password is protected by the Fifth Amendment in the U.S., while no such protection is afforded to finger-prints used for unlocking smart-phones.

Another example of a (mostly) idiotic technology is various variations of “cloud”*/** services (as noted recently): This is good for the maker of the cloud service, who now has a greater control of the users’ data and access, has a considerable “lock in” effect, can forget about problems with client-side updates and out-of-date clients, … For the users? Not so much. (Although it can be acceptable for casual, private use—not enterprise/business use, however.) Consider, e.g., an Office-like cloud application intended to replace MS Office. Among the problems in a comparison, we have***:

*Here I speak of third-party clouds. If an enterprise sets up its own cloud structures and proceeds with sufficient care, including e.g. ensuring that own servers are used and that access is per Intranet/VPN (not Internet), we have a different situation.

**The word “cloud” itself is extremely problematic, usually poorly defined, inconsistently used, or even used as a slap-on endorsement to add “coolness” to an existing service. (Sometimes being all-inclusive of anything in the Internet to the point of making it meaningless: If I have a virtual server, I have a virtual server. Why would I blabber about cloud-this and cloud-that? If I access my bank account online, why should I want to speak of “cloud”?) Different takes might be possible based on what exact meaning is intended resp. what sub-aspect is discussed (SOA interactions between different non-interactive applications, e.g.). While I will not attempt an ad hoc definition for this post, I consider the discussion compatible with the typical “buzz word” use, especially in a user-centric setting. (And I base the below on a very specific example.)

***With some reservations for the exact implementation and interface; I assume access/editing per browser below.

  1. There are new potential security holes, including the risk of a man-in-the-middle attack and of various security weaknesses in and around the cloud tool (be they technical, organizational, “social”, whatnot). The latter is critical, because the user is forced to trust the service provider and because the probability of an attack is far greater than for a locally installed piece of software.
  2. If any encryption is provided, it will be controlled by the service provider, thereby both limiting the user and giving the service provider opportunities for abuse. (Note e.g. that many web-based email services have admitted to or been caught at making grossly unethical evaluations of private emails.) If an extra layer of encryption can at all be provided by the user, this will involve more effort. Obviously, with non-local data, the need for encryption is much higher than for local data.
  3. If the Internet is not accessible, neither is the data.
  4. If the service provider is gone (e.g. through service termination), so is the data.
  5. If the user wishes to switch provider/tool/whatnot, he is much worse off than with local data. In a worst case scenario, there is neither a possibility to down-load the data in a suitable form, nor any stand-alone tools that can read them. In a best case scenario, he is subjected to unnecessary efforts.
  6. What about back-ups? The service provider might or might not provide them, but this will be outside the control of the user. At best, he has a button somewhere with “Backup now!”, or the possibility to download data for an own back-up (but then, does he also have the ability to restore from that data?). Customizable backup means will not be available and if the service provider does something wrong, he is screwed.
  7. What about version control? Notably, if I have a Git/SVN/Perforce/… repository for everything else I do, I would like my documents there, not in some other tool by the service provider—if one is available at all.
  8. What about sharing data or collaborating? Either I will need yet another account (if the service provider supports this at all) for every team member or I will sloppily have to work with a common account.

To boot, web-based services usually come with restrictions on what browsers, browser versions, and browser settings are supported, forcing additional compromises on the users.
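As a minimal illustration of the backup concern in point 6: if a provider offers only a raw data export, verifying that a later restore actually round-trips is left entirely to the user. A sketch using a checksum (the “export” here is a stand-in byte string, not any real provider API):

```python
# Sketch of a user-side backup sanity check: a restore is only
# trustworthy if it reproduces the export bit-for-bit.
import hashlib

def checksum(data):
    return hashlib.sha256(data).hexdigest()

def verify_roundtrip(exported, restored):
    # Compare the restored data against the original export.
    return checksum(restored) == checksum(exported)

document = b"quarterly report, v3"   # pretend cloud export
assert verify_roundtrip(document, document)             # intact restore
assert not verify_roundtrip(document, document + b"!")  # corrupted restore
print("backup round-trip verified")
```

Even this trivial check presupposes that the service exposes both an export and a restore path, which, as argued above, is outside the user’s control.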

Yet another example is Bitcoin: A Bitcoin has value simply because some irrational people feel that it should have value and are willing to accept it as tender. When that irrationality wears off, all Bitcoins become valueless. Ditto if Bitcoin is supplanted by another variation on the same theme that proves more popular.

In contrast, fiat money (e.g. the Euro or the modern USD) has value because the respective government enforces it: Merchants, e.g., are legally obliged, with only minor restrictions, to accept the local fiat money. At most, a merchant can disagree about how much e.g. a Euro should be worth in terms of whatever he is selling, and raise his prices—but if he does so by too much, a lack of customers will ruin him.

Similarly, older currencies that were on the gold (silver, whatnot) standard, or were actually made of a suitable metal, had a value independent of themselves and did not need an external enforcer or any type of convention. True, if everyone had suddenly agreed that gold was severely over-valued (compared to e.g. bread), the value of a gold-standard USD would have tanked correspondingly. However, gold* is something real, it has practical uses, and it has proved enduringly popular—we might disagree about how much gold is worth, but it indisputably is worth something. A Bitcoin is just proof that someone, somewhere has performed a calculation that has no other practical use than to create Bitcoins…

*Of course, it does not have to be gold. Barring practical reasons, it could equally be sand or bread. The money-issuing bank guarantees to at any time give out ten pounds of sand or a loaf of bread for a dollar—and we have the sand resp. bread standard. (The gold standard almost certainly arose due to the importance of gold coins and/or the wish to match non-gold coins/bills to the gold coins of old. The original use of gold as a physical material was simply its consistently high valuation for a comparably small quantity.)

As an aside, the above are all ideas that are objectively bad, no matter how they are twisted and turned. This is not to be confused with some other things I like to complain about, e.g. the idiocy of various social media of the “Please look at my pictures from last night’s party!” or “Please pay attention to me while I do something we all do every day!” type. No matter how hard it is for me to understand why there is a market for such services, it is quite clear that the market is there and that it is not artificially created. Catering to that market is legitimate. In contrast, in as far as the above hypes have a market, it is mostly through people being duped.

(However, if we are more specific, I would e.g. condemn Facebook as an attempt to create a limiting and proprietary Internet-within-the-Internet, and as having an abusive agenda. A more independent build-your-own-website kit, possibly in combination with RSS or an external notification or aggregation service following a standardized protocol would be a much better way to satisfy the market from a user, societal, and technological point of view.)

Written by michaeleriksson

September 18, 2017 at 11:37 pm

On Firefox and its decline


I recently encountered a blog post by a former Firefox insider discussing its declining market share.

When it comes to the important question “why?”, he offers that “Google is aggressively using its monopoly position in Internet services such as Google Mail, Google Calendar and YouTube to advertise Chrome.”—which cannot be more than a part of the truth.

If it were the entire truth, this would mostly show in new or inexperienced users going to Chrome instead of Firefox, those that have not yet grown accustomed to a particular browser.

Then why is there a drop among the long-term users? Those who have used Firefox for years? Those who (like me) first used the Firefox grandfather Mosaic well over twenty years ago and then graduated to its father, Netscape?

Things like that happen either because the competition grows better (or grows better faster) or because one’s own product grows worse. Indeed, this is what I have repeatedly experienced as a user: After Netscape, I switched to Opera for a number of years, because Opera actually was the better browser, especially with its tabs. Year after year, Opera failed to add useful new features and tried to force-feed its users poorly thought-through ideas that some manager or developer out of touch with his users saw as revolutionary. Eventually, I gave up and moved over to Firefox, which at the time did a reasonable job and had over-taken Opera—not because of its own qualities, but because Opera declined.

Unfortunately, Firefox has gone down the same destructive path as Opera followed, has grown worse and worse, and the only reason that I am still with Firefox is that I use the “Tor Browser Bundle”, which is based on Firefox and recommended as the safest way to use Tor by the Tor developers.

To list all that is wrong with Firefox and its course would take far too long—and would require digging through many years* of “for fuck’s sake” memories.

*I am uncertain how long I have been using Firefox by now. In a rough guesstimate, the Opera-to-Firefox switch might have occurred some ten years ago.

However, to list some of the most important (often over-lapping) issues:

  1. The removal of preferences that should be standard, e.g. the ability to turn images and JavaScript on and off. If these remain at all, they are pushed into the infamous, poorly documented, and unreliable “about:config”—the use of which is strongly discouraged by Firefox.

    When such preferences are removed (respectively moved to “about:config”) the handling can be utterly absurd. Notably, when the setting for showing/not showing images in web pages was removed, the Firefox developers chose to defy the stated will of the user by resetting the internal setting in about:config to the default value…

    To boot, config switches that are in “about:config” often stop working after some time, merely being kept to prevent scripts from breaking, but no longer having any practical function. Among the side-effects is that someone finds a solution for a problem on the Internet, alters the configuration accordingly—and has to spend half-an-hour researching why things still do not work as intended. (The reason being that the solution was presented for an earlier version of Firefox and Firefox failed to make clear that this solution was no longer supported.)

  2. Forcing users to download add-ons to handle tasks that a good browser should have in its core functionality, while adding nice-to-haves appropriate for an add-on to the official interface… (The “sync” bullshit is a good example.) Worse: Not all add-ons are compatible with each other (or with every Firefox version), making this road unnecessarily problematic, with results including even browser crashes. To boot, any additional add-on increases the risk of a hackable vulnerability, data being leaked to a hostile third-party, or similar.
  3. Failing to add functionality that would be helpful, e.g. a possibility to disable the design atrocity that is “position:fixed” or a user-friendly mechanism for mapping keys.
  4. One truly great (and expectedly oldish) feature of Firefox is the ability to save tabs and windows when exiting or when the browser crashes, and to have them restored on the next start. This especially since Firefox crashes more than most other applications.

    Unfortunately, the configuration of this feature is a bitch (and probably disabled by default). There are at least two (likely more; it has been a while since I dealt with this the last time) flags that have to have the right value for this to work—one of which should rightly be entirely independent*. The names of these settings in about:config and the description in the GUI are non-obvious, more-or-less forcing a user to search the web for information—if he is aware that the feature exists in the first place. And: In several releases this feature has been so bug ridden that no combination of settings has worked…

    *The one appears to control the feature; the other controls whether a warning is issued when a user tries to close more than one tab at a time. When the latter is disabled, which is very reasonable even for someone who uses the former, the former is ignored…

    Worse, without this functionality a simple “CTRL-q” just quits the browser—no confirmation, no tabs saved. For a power surfer who regularly has dozens of tabs open at the same time, this is a major issue. This is the worse since someone heavy on tabs is almost certainly a frequent user of “CTRL-w”* and there is no good native way to change key bindings—amateurish!

    *I.e. “close the current tab”. Note that “w” is next to “q” on a standard QWERTY-keyboard, making the likelihood of occasional accidents quite high.

  5. The config management is lousy.

    For instance, Firefox started with the Windows-style concept of “one user; one configuration” and never added provisions to e.g. specify config files on the command line. Among the negative side-effects is the later need to invent the redundant and poorly implemented concept of a “profile”—confusing, user-unfriendly, and bloating the code.

    For instance, “about:config” provides many, many options of the type normally found in a config file, which could have been edited much more comfortably with a text-editor than over the about:config interface. However, this opportunity was not taken, and the users are stuck with about:config. Actually, there are some types of files, but these are absurd in comparison with those used by most Linux applications—and it is very, very clear that users are not supposed to edit them. (Statements like “Do not edit this file.” feature prominently.) For example, Firefox uses user_pref("ui.caretBlinkTime", 0); where any reasonable tool would use ui.caretBlinkTime=0.

    For instance, there is so much secrecy about and inconsistency in the configuration that the standard way to change an apparently simple setting is to install an add-on… (Also cf. above.) Where a user of a more sensible application might be told “add x=y to your config file”, the Firefox user is told to “install add-on abc”…

    For instance, copying the configuration from one user to another fails miserably (barring subsequent improvements), because it contains hard-coded paths referring to the original user.

    For instance, it used to be the case that a Firefox crash deleted the configuration, forcing the user to start over… (This was actually something that kept me with Opera for a year or so after I was already thoroughly fed up with it.)

  6. The support for multi-user installations, the standard for Linux and many corporate Windows installations, is weak and/or poorly documented. The results include e.g. that all users who want to use popular add-ons have to install them individually—and keep them up-to-date individually.

    (Disclaimer: I looked into this on several occasions years ago. The situation might have been improved.)

  7. There are a number of phone-home and phone-third-party mechanisms that bring very little value, but often pose a danger, e.g. through reducing anonymity. This includes sending data to Google, which I would consider outright negligent in light of Google’s position and how it has developed over the years.
  8. The recent, utterly idiotic decision to drop Alsa support in favour of Pulse on Linux. This decision is so idiotic that I actually started to write a post on that topic alone when I heard of it. Most of what I did write is included as an excursion below. (Beware that the result is not a full analysis.)
  9. The address bar started off very promisingly, e.g. with the addition of search keywords*. Unfortunately, it has so many problems by now that it does a worse job than most other browsers—and it grows worse over time. The preferred Firefox terminology “awesomebar” borders on an insult.

    *For instance, I have defined a keyword so that when I enter “w [something]”, a Wikipedia search for “[something]” is started. “ws [something]” does the same for the Swedish version of Wikipedia; “wd [something]” for the German. (I have a number of other keywords.)

    Among the problems: If a page is loading slowly and I re-focus the address bar and hit return again, the obvious action would be a new attempt to load this page—instead, the previous page is reloaded! The history suggestions arbitrarily exclude all “about:” entries and all keyword searches—if I search with “w [something]” and want to switch to “g [something]”*, I have to retype everything. For some time now, the history functionality has per default been weakened by not listing the potential matches directly, but preceding them with annoying and useless suggestions to “visit” or “search” that only delay the navigation and confuse the user. Moreover, while there used to be working config flags to disable this idiocy, there are now just config flags (that do not work)…

    *Used to mean “search with Google” a long, long time ago; hence the “g”. Currently, I use duckduckgo.

  10. The layout/design and GUI (including menu handling) have been drastically worsened on several occasions.
  11. Many of the problems with Firefox can be remedied with “Classic Theme Restorer” (an absolute life-saver) or similar “user empowering” add-ons. Unfortunately, these all use the “XUL-framework”*, which Firefox has decided to discontinue. There is a new framework for add-ons, but it does not support this type of functionality (whether “yet” or “ever” is not yet clear). Many of the most popular add-ons, including “Classic Theme Restorer”, will therefore not be able to provide the full scope of functionality and at least some of them, again including “Classic Theme Restorer”, will be discontinued by their developers when XUL is turned off.

    *In a twist, XUL was once considered a major selling point for Firefox.

    My poor experiences with Firefox and the absurd attitudes of the Firefox developers might have made me paranoid—but I cannot suppress the suspicion that this is deliberate, that the add-ons that allow users to alter the default behaviors are viewed as problems, as heretics to burn at the stake.
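As a side note to the configuration complaints in point 5: the user_pref() syntax quoted there is Firefox’s real prefs.js format, and converting it to the plain key=value style argued for would be trivial. A sketch (this converter is my own illustration, not an existing Mozilla tool, and ignores edge cases like comments and escaped quotes):

```python
# Convert a Firefox prefs.js line into plain key=value form.
import re

PREF_RE = re.compile(r'user_pref\("([^"]+)",\s*(.+)\);')

def to_plain(line):
    match = PREF_RE.match(line.strip())
    if match is None:
        raise ValueError(f"not a pref line: {line!r}")
    key, value = match.groups()
    return f"{key}={value}"

print(to_plain('user_pref("ui.caretBlinkTime", 0);'))  # ui.caretBlinkTime=0
```

That a format this mechanical was chosen over one editable by hand supports the point that users are not meant to touch these files.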

To this should be added that since the switch from a “normal” versioning scheme to the idiocy of making allegedly major releases every few months*, the feature cramming has increased, with a (very predictable) increase in the number of run time problems. The Firefox makers were convinced that this would turn Firefox from a browser into a super-browser. In reality, this only resulted in hastening its demise—in much the same way that a TV series fighting for its survival ruins the good points it had left and drives away the remaining faithful**. If in doubt, most people who try to jump the shark are eaten…

*I.e. making version jumps of 44 to 45 to 46, instead of 4.4 to 4.5 to 4.6 or even 4.4.0 to 4.4.1 to 4.4.2.

**A topic I have been considering recently and intend to write a blog post on in the close future.

Sadly, the delusional author of the discussed article actually makes claims like “Firefox is losing despite being a great browser, and getting better all the time.”—turning the world on its head.

Excursion on the competition:

Unfortunately, Firefox could still be the lesser evil compared to its competitors. Chrome/Chromium, e.g., has many strengths, but configurability and adaptability to the user’s needs are not among them; on the contrary, it follows the deplorable school of achieving ease of use through reducing the controllable feature set—the equivalent of Apple’s infamous one-buttoned mouse. Chrome is entirely out of the question for anyone concerned with privacy; while its open-source sibling Chromium (in my possibly incorrect opinion) trails Chrome in other regards. I have not tried Opera for years; but combining the old downwards trend (cf. above) with the highly criticized platform shift that almost killed it, I am not optimistic. Internet Explorer and Edge are not worthy of discussion—and are Windows-only to begin with. Safari, I admit, I have never used and have no opinion on; however, it is Mac-only and my expectations would be low, seeing that Apple has pioneered many of the negative trends in usability that plague today’s software. Looking at smaller players, I have tried possibly a dozen over the years. Those that have been both mature and user-friendly have been text-based and simply have not worked very well with many modern web sites/designs, heavy in images and JavaScript; most others have been either too minimalistic or too immature. A very interesting concept is provided by uzbl, which could, on paper, give even the most hard-core user the control he needs—but this would require a very considerable effort of his own, which could turn out to be useless if the limited resources of uzbl dry up.

Excursion on the decline of open source:

It used to be that open-source software was written by the users, for the users; that the developers were steeped in the Unix tradition of software development; that they were (on average) unusually bright and knowledgeable; … Today, many open-source projects (e.g. Firefox, Libre-/OpenOffice, many Linux Desktop environments) approach software development just like the commercial firms do, with an attitude that the user should be disenfranchised and grateful for whatever features the projects decided that he should like; quality is continually sacrificed in favour of feature bloat (while central features are often still missing…); many of the developers have grown up on Windows or Mac and never seen anything better; … Going by the reasoning used by many Firefox developers in their bug tracking tool, Firefox appears to have found more than its share of people who should not be involved in software development at all, having poor judgment and worse attitudes towards users.

Excursion on Pulse:

(Disclaimer: 1. The below is an incomplete version of an intended longer analysis. 2. At the time the below was written, I had a few browser tabs open with references or the opinions of others that I had intended to include. Unfortunately, these went missing in a Firefox crash…)

The reasoning is highly suspect: Yes, supporting two different sound systems can be an additional strain on resources, but this decision is just screwed up. Firstly, they picked the wrong candidate: Pulse is extremely problematic and malfunctions so often that I would make the blanket recommendation to de-install it and use Alsa on almost any Linux system. Moreover, Pulse is not a from-scratch system: It is an add-on on top of Alsa, and any system using Pulse must also have Alsa installed—but any system can use Alsa without having Pulse. Not only will more users have access (or potential access) to Alsa, but good software design tries to stick with the lowest common denominator to the degree possible. Secondly, at least one abstraction already exists that is able to abstract multiple sound systems on Linux (SDL; in addition, I am semi-certain that both Alsa and Pulse provide backwards compatibility for the older OSS, which could have been used as a workaround). Thirdly, if none had existed, the proper Open Source way would have been to create one. Fourthly, a browser maker who tries to dictate what sound system a user should use has his priorities wrong in an almost comically absurd manner. (What is next? KDE only? Kaspersky only? Asus only?) Notably, there are very many Linux users who have made a very deliberate decision not to burden their systems with Pulse—and have done so for very good reasons*.

*Including how error prone it is, too high a latency for many advanced sound users, the wish for a less bloated system, or Pulse’s straying too far from the classical principles behind Unix and Open Source software. Do an Internet search for more details on the controversy.
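To illustrate the abstraction argument: an application need not hard-code one sound system at all; it can probe for what is present and fall back to the lowest common denominator. The sketch below is purely illustrative—the class names and the `present` flags stand in for real probing (e.g. checking for `libasound` or a running Pulse daemon), which a real implementation would do via native bindings:

```python
"""Hypothetical sketch of an audio-backend abstraction layer.

All names are illustrative; a real implementation would call into
Alsa/Pulse through native bindings rather than take a 'present' flag.
"""

class Backend:
    name = "none"
    def available(self):
        return False
    def play(self, samples):
        raise NotImplementedError

class AlsaBackend(Backend):
    name = "alsa"
    def __init__(self, present=True):
        self._present = present  # in reality: probe for /dev/snd, libasound, ...
    def available(self):
        return self._present
    def play(self, samples):
        return f"alsa: played {len(samples)} samples"

class PulseBackend(Backend):
    name = "pulse"
    def __init__(self, present=False):
        self._present = present  # Pulse may or may not be installed
    def available(self):
        return self._present
    def play(self, samples):
        return f"pulse: played {len(samples)} samples"

def pick_backend(backends):
    """Return the first available backend; callers never hard-code one."""
    for b in backends:
        if b.available():
            return b
    raise RuntimeError("no audio backend available")

# Pulse absent, Alsa present -- the configuration the post argues
# must keep working:
chosen = pick_backend([PulseBackend(present=False), AlsaBackend(present=True)])
print(chosen.name)  # alsa
```

This is essentially what SDL already does for the application: the caller asks for audio, and the library settles on whichever backend the system actually provides.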

A particular annoyance is that the decision is partly justified by the claim that statistics gathered by Firefox’s phone-home functionality would indicate that hardly anyone used Alsa—which is extremely flawed, because many Linux distributions and individual educated users disable this phone-home functionality as a matter of course. Since the users who have a system with phone-home enabled are disproportionately likely to be unlucky/careless/stupid enough to also use Pulse, the evidence value is extremely limited.

Written by michaeleriksson

July 26, 2017 at 9:51 pm

Focus stealing—one of the deadly sins of software

leave a comment »

Experimenting with the (currently very immature) browser Arora, I re-encountered one of the deadly sins of software development: Presumptuous and unnecessary focus stealing.

While I, as a Linux user, am normally not met with many instances of this sin, they are all the more annoying when they do occur. Notably, they almost exclusively happen when I am off doing something completely unrelated on a different virtual desktop, with the deliberate intention of finishing one thing and then revisiting the (as it eventually turns out) focus-stealing application once I am done or in five minutes. This revisiting would include checking any results, answering any queries, giving confirmations, whatnot. Instead, I am pulled back to the focus-stealer mid-work, my concentration is disrupted, I have to switch my own (mental) focus to something new in a disruptive manner, and I generally feel as if someone has teleported me from a (typically pleasant) situation to another (typically unpleasant) one.

There are other very good reasons never to steal focus, including that a typing or mouse-clicking user can accidentally cause an unwanted action to be taken. Consider, e.g., the user who is typing in a document, hits the return key—and sees the return being caught by a focus-stealing confirmation window, which interprets the return key as confirmation. In some cases, the user would have confirmed anyway, but in others he would not—and sometimes the results can be downright disastrous.

Focus stealing is stealing: If an application steals focus, it takes something that is not its own to take. Such acts, just as with physical property, must be reserved for emergencies and duress. Normally criminal acts can be allowable, e.g., if they are needed to avert immediate physical danger; in the same way, focus stealing can be allowed for notifications of utmost importance, e.g. that the computer is about to be shut down and that saving any outstanding work in the next thirty seconds would be an extremely good idea. Cases that are almost always not legitimate include requesting the user’s input; notification that a download is complete or a certain step of a process has been completed; and (above all) spurious focus stealing, without any particular message, because a certain internal state has changed (or similar).

“But some users want to be notified!!!”: This is not a valid excuse—we cannot let non-standard wishes from one group ruin software for another group. If there is a legitimate wish for notification (and most cases of focus stealing I have seen have not been in situations where such a wish seemed likely to be common—even when allowing for the fact that different users have different preferences), other ways can be found than unwanted focus stealing. Consider, e.g., letting the user specifically request focus stealing (more accurately, in this case, “focus taking”) for certain tasks by a checkbox or a configuration option (which, obviously, should be off by default), using a less intrusive notification mechanism (e.g. a notification in a taskbar or an auditory signal; may be on by default, but must be deactivatable), or the sending of an email/SMS (common for very long-running tasks and tasks on other computers; requires separate configuration).
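The options above can be sketched as a small notification dispatcher. Everything here is hypothetical—the configuration keys and mechanism names are mine, invented for illustration—but the point is structural: focus taking is one delivery mechanism among several, and it is opt-in, not the default:

```python
"""Hypothetical sketch of notification dispatch that never steals focus
by default. Config keys and mechanism names are illustrative only."""

def notify(message, config=None):
    """Return the delivery actions for a completion notice, using only
    the mechanisms the user has enabled; focus taking is strictly opt-in."""
    config = config or {}
    actions = []
    if config.get("allow_focus_taking", False):  # off by default
        actions.append(("raise-window", message))
    if config.get("taskbar_hint", True):         # unobtrusive; on by default
        actions.append(("taskbar-hint", message))
    if config.get("email"):                      # for long-running/remote tasks
        actions.append(("email", config["email"]))
    return actions

print(notify("download complete"))
# default config: only the harmless taskbar hint fires
```

With an empty configuration the user gets a taskbar hint and nothing more; only a user who has explicitly set `allow_focus_taking` ever has a window raised over his current work.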

As a particularity, if something requires user involvement (e.g. a confirmation) before the application can continue, there is still only rarely a reason for focus stealing. Notably, users working on another desktop will almost always check in regularly; those on the same desktop will usually notice without focus stealing; and there is always the above option of notification by other means. Further, for short-running tasks, it is seldom a problem that the user misses a notification—and he may well have physically left his computer for a long-running task.

Finally, any developer (in particular, those who feel that their own application and situation are important enough to warrant an exception) should think long and hard on the following: He may be about to commit one of the other deadly sins, namely over-estimating how important his application is to others. (Come to think of it, the applications that have stolen focus from me under Linux have usually been those of below-average importance—the ones I use every now and then, or only use once or twice to see if they are worth having.)

Written by michaeleriksson

May 13, 2010 at 8:59 pm

Never listen to your customers

leave a comment »

I just ran across an entry on another blog with a very dangerous message: Never listen to your customers.

The rationale for this very drastic statement: Customers will only be able to tell you what is wrong, not what is possible—and, in order to be the best in the long term, a more visionary and actively future-shaping approach is needed.

In this, the post is not entirely wrong; however, it is still highly naive. For one thing, listening to the feature wishes of the customers is not the same thing as listening to the needs of the customers. For another, following this advice would perpetuate many of the bad habits of today’s software makers (including those that are responsible for the third-rate software delivered by e.g. Microsoft). A likely incomplete list:

  1. Neglecting bug-fixes and improvements of the existing features, in favour of adding new features.
  2. Featuritis, where feature after feature is added—most of which will eventually not be used by the typical user, and many of which may even be hindrances. (This with a number of side-effects, including greater complexity and more bugs.)
  3. A “pin the tail on the donkey” approach to features, where ten features are added and only one eventually sticks.
  4. A thinking that makes marketing more important than quality, to the detriment of the customers.

This is particularly interesting with regard to the sometimes heard claim that open-source software would be lacking in innovation (and, in the rhetorical context, ipso facto be unworthy of attention): Open-source products are typically written by the users for the users, on the basis that if someone has an itch to scratch, he is given the opportunity to scratch it. If someone is hindered by a bug, he can fix it—he does not have to wait until some manager decides that the bug is worthy enough to be fixed. If he lacks a basic feature, he can add it. If he sees a room for innovation, he can innovate. Etc. (It should not be denied that this road is closed to many users because they, unlike the majority of earlier users, lack the programming skills; however, this does not mean that these products are unsuitable for the man on the street—as proved by e.g. Firefox.)

Notably, however, open source is by no means lacking in innovation—it just tends to eclectically add what has been found to work, be needed, and bring benefit to the users. As for true innovations, they have very often been made in a research context, an open-source context, or in a context that today would be considered open source. The “innovators” in the major software makers have very often just copied, modified, or extended an idea that was made by someone else years earlier.

I agree that innovation is necessary to move beyond the borders of the known. Innovation, however, should not be made at the cost of “due diligence” towards existing features. It should not be confused with implementing a dozen ideas and seeing what sticks. It does not equal success. In fact, the world is full of innovative companies that ultimately failed, because they lacked the marketing, resources, timing, or business prowess to succeed—and, arguably, the greatest benefit of innovation (from a business POV) is just that it allows for better marketing.

Not listening to the users can make for more commercially successful software—it does not make for better software.

Written by michaeleriksson

April 7, 2010 at 2:19 pm