Michael Eriksson's Blog

A Swede in Germany

Archive for September 2017

Disturbing privacy violations


Some selected quotes from a very disturbing news article:

Muhammad Rabbani*, a director of Cage*, convicted of obstructing counter-terrorism police when stopped at Heathrow

*I have no idea what positions Rabbani and Cage take, or whether they are worthy of support in general. No such support should be implied from this post—what I do support is Rabbani’s right to keep his privacy from a government that discards human rights.

The international director of the campaign group Cage has been found guilty of obstructing counter-terrorism police by refusing to hand over his mobile phone and laptop passwords.

The verdict confirms that police have powers in port stops under schedule seven of the 2000 Terrorism Act to demand access to electronic devices, and refusal to cooperate is a criminal offense.

This is a gross violation of the right to privacy, especially privacy from snooping governments, that everyone should have. Cf. e.g. a previous article on the topic or a similarly themed satire post.

This type of password demand is absolutely absurd, starting with the fact that there is no particular reason why someone crossing the border* of a country should be exposed to deeper checks than someone currently residing within the same country**. The situation is not analogous to a regular baggage check, because what is stored on a device cannot blow a plane up, cannot be used to perform a hi-jacking, cannot be stabbed into the eye of a flight attendant, …

*Or e.g. traveling domestically.

**Rabbani appears to have been returning to the U.K. In other words, even a highly dubious attempt to filter out “unwanteds” based on device contents would have been misplaced in this specific case.

There is, in fact, very little to gain through such checks and there is virtually no legitimate reason why a check should take place—even assuming that the intrusion on the rights of the passengers/citizens/whatnots were tolerable. Barring an even more unethical installation of malware for the purpose of spying on the device owner in the long term, the most that can be achieved in a suitably short time-frame is to briefly look at the device owner’s emails, desktop files, or similar—and this is not something that e.g. a member of a terrorist organization or organized crime should reasonably fall for. Anything that could have been of legitimate interest can be expected to be too well hidden; what can be found is highly private information that is no business of the government’s whatsoever (e.g. who had a vacation affair with whom—not to mention the whole “intimate picture” problem).

No, in order to have a reasonable chance of finding something legitimate (barring unethical malware…), the device would be needed for several hours or a full copy has to be made—which in turn can take hours and poses extreme risks where private data is concerned (e.g. through inadvertent leaks). With a proficient hider of information, the hours go into days, might require a specialist in computer forensics, and possibly still turn out to be in vain. If in doubt: Should the government have a high hit ratio, the conclusion of those who have something to hide would be to not carry the information past such check-points… (Instead keeping it in one place and e.g. distributing copies per i2p when needed.)

To boot, such demands for passwords can also violate the rights of third-parties or force the device owner to violate contractual obligations. Consider e.g. the case of a work laptop or the utterly insane idea that people traveling past customs should be obliged to give out their social-media passwords.

This is the type of thing where the public should legitimately take to the streets and demand that their rights and interests are respected—very much unlike e.g. the protests against Donald Trump. (Cf. e.g. a recent post.) Even comparing with his country-based restrictions on visitors, this is an outrage: The country-based restrictions serve a legitimate purpose and can have some success in achieving this purpose; this type of snooping, on the other hand, is useless. The former only rarely infringe on the reasonably expected rights of others*; the latter does so whole-sale.

*We can discuss whether entirely free movement should be allowed, but the fact remains that it is not. A great many countries reserve the right to refuse people entry, and even restrictions based on e.g. country of origin are quite common, and this has been the norm historically. The increasingly free movement we see today is as big a novelty as modern day governmental snooping—outside of dictatorships. Exceptions where a reasonably expected right would be violated occur e.g. when a foreigner residing in the U.S. is refused re-entry after visiting his home country. (Which is a possible outcome of Trump’s suggestions, at least according to his opponents.)


Written by michaeleriksson

September 25, 2017 at 10:14 pm

Follow-up: International Day Against DRM


As a brief follow-up to my recent post on DRM, a few claims* from a (German) article on a piracy study:

*I have not investigated the details myself, and I draw only on this source; however, the source has a very strong reputation—then again, it is still journalists at work.

  1. The EU commission ordered a study on content piracy in 2015, and later tried to suppress and misrepresent the study.
  2. The overall negative effects of piracy found by the study were small.
  3. Movies saw a loss of 27 legal “transactions” (“Transaktionen”) per 100 illegal. This was dominated by block-busters. (I note, looking back at my original post, that block-busters are a prime target of organized and/or professional pirates, who are hindered far less by DRM than e.g. ordinary users wanting to make a backup copy.)
  4. Music saw no impact—despite music piracy being the favorite industry target for a long time.
  5. Computer games saw a gain of 24 legal transactions: An illegal download increases the chance of a legal purchase.

As for the paradoxical result for computer games, and to a lesser degree music, I would speculate that this is partially a result of an informal trial by prospective consumers: Download a product, check it out, and then either reject the product or buy it legally. This makes great sense for games, where the total playing time often goes into weeks, sometimes even months; with a movie, many users might see no major point in re-watching even a very good movie, considering the sheer number of new releases, and more than several re-watchings are reserved for the best-of-the-best-of-the-best. Music could be somewhere in between, as the numbers suggest, and there is always the possibility of someone additionally buying other music from the same artist. I also note that in terms of “bang for the buck”, games and music usually fare far better than movies. The authors of the study, according to the above article, mention that computer-game purchases often come with additional perks, e.g. bonus levels.

Written by michaeleriksson

September 22, 2017 at 6:17 pm

The success of bad IT ideas


I have long been troubled by the many bad ideas that are hyped and/or successful in the extended IT world. This includes things that simply do not make sense, things that are inferior to already existing alternatives, and things that are good for one party but pushed as good for another, …

For instance, I just read an article about Apple and how it is pushing for a new type of biometric identification, “Face ID”—following its existing “Touch ID” and a number of other efforts by various companies. The only positive thing to say about biometric identification is that it is convenient. It is, however, not secure and relying* on it for anything that needs to be kept secure** is extremely foolish; pushing such technologies while misrepresenting the risks is utterly despicable. The main problem with biometric identification is this: Once cracked, the user is permanently screwed. If a password is cracked, he can simply change the password***; if his face is “cracked”****, he can have plastic surgery. Depending on the exact details, some form of hardware or software upgrade might provide a partial remedy, but this brings us to another problem:

*There is, however, nothing wrong with using biometric identification in addition to e.g. a password or a dongle: If someone has the right face and knows the password, he is granted access. No means of authorization is foolproof and combining several can reduce the risks. (Even a long, perfectly random password using a large alphabet could be child’s play if an attacker has the opportunity to install a hidden camera with a good view of the user’s keyboard.)

**Exactly what type of data and what abilities require what security will depend on the people involved and the details of the data. Business related data should almost always be kept secure, but some of it might e.g. be publicly available through other channels. Private photos are normally not a big deal, but what about those very private photos from the significant other? Or look at what Wikipedia says about Face ID: “It allows users to unlock Apple devices, make purchases in the various Apple digital media stores (the iTunes Store, the App Store, and the iBooks Store), and authenticate Apple Pay online or in apps.” The first might or might not be OK (depending on data present, etc.), the second is not, and the third even less so.

***Depending on what was protected and what abilities came with the password, this might be entirely enough or there might be a need for some additional steps, e.g. a reinstall.

****Unlike with passwords, this is not necessarily a case of finding out some piece of hidden information. It can also amount to putting together non-secret pieces of information in such a manner that the biometric identification is fooled. For instance, a face scanner that uses only superficial facial features could be fooled by taking a few photos of the intended victim, using them to re-create the victim’s face on a three-dimensional mask, and then presenting this mask to the scanner. Since it’s hard to keep a face secret, this scenario amounts to a race between scanner maker and cracker—which the cracker wins by merely having the lead at some point of the race, while the scanner maker must lead every step of the way.

False positives vs. false negatives. It is very hard to reduce false positives without increasing false negatives. For instance, long ago, I read an article about how primitive finger-print* checkers were being extended to not just check the finger print per se but also to check for body temperature: A cold imprint of the finger would no longer work (removed false positive), while a cut-off finger would soon grow useless. However, what happens when the actual owner of the finger comes in from a walk in the cold? Here there is a major risk of a false negative (i.e. an unjustified denial of access). Or what happens if a user of Face ID has a broken nose**? Has to wear bandages until facial burns heal? Is he supposed to wait until his face is back to normal again, before he can access his data, devices, whatnot?

*These morons should watch more TV. If they had, they would have known how idiotic a mere print check is, and how easy it is for a knowledgeable opponent (say the NSA) to by-pass it. Do not expect whatever your lap-top or smart-phone uses to be much more advanced than this. More sophisticated checks require more sophisticated technology, and usually come with an increase in one or all of cost, space, and weight.

**I am not familiar with the details of Face ID and I cannot guarantee that it will be thrown specifically by a broken nose. The general principle still holds.
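
To make the trade-off concrete, here is a minimal sketch in Python (with entirely made-up similarity scores and thresholds; real biometric systems are far more involved): the same threshold governs both error types, so a threshold lenient enough to admit the owner with a broken nose also admits an attacker with a good mask. The last function also illustrates the “in addition to” combination from the first footnote above, where the biometric check merely supplements a password.

    import hashlib
    import hmac

    def face_match(similarity, threshold):
        # Toy face check: accept if the measured similarity between the
        # presented face and the stored template reaches the threshold.
        # (Real systems derive the similarity from sensor data; here it
        # is simply a made-up number between 0 and 1.)
        return similarity >= threshold

    # threshold 0.99: owner with a broken nose (0.90) is rejected -> false negative
    # threshold 0.80: attacker with a good mask (0.85) is accepted -> false positive
    for threshold in (0.99, 0.80):
        print(threshold,
              "owner, broken nose:", face_match(0.90, threshold),
              "| attacker with mask:", face_match(0.85, threshold))

    def unlock(similarity, password, stored_hash):
        # Biometrics *in addition to* a password: both checks must pass,
        # so a fooled face check alone does not let an attacker in.
        password_ok = hmac.compare_digest(
            hashlib.sha256(password.encode()).hexdigest().encode(), stored_hash)
        return face_match(similarity, 0.90) and password_ok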

Then there is the question of circumvention through abuse of the user: A hostile (say, a robber or a law enforcement agency) could just put the user’s thumb, eye ball, face, whatnot on the detector through use of force. With a password, he might be cowed into surrendering it, but he has the option to refuse even a threat of death, should the data be sufficiently important (say, nuclear launch codes). In the case of law enforcement, I seem to recall, but could be wrong, that not giving out a password is protected by the Fifth Amendment in the U.S., while no such protection is afforded to fingerprints used for unlocking smart-phones.

Another example of a (mostly) idiotic technology is various variations of “cloud”*/** services (as noted recently): This is good for the maker of the cloud service, who now has greater control of the users’ data and access, has a considerable “lock in” effect, can forget about problems with client-side updates and out-of-date clients, … For the users? Not so much. (Although it can be acceptable for casual, private use—not enterprise/business use, however.) Consider, e.g., an Office-like cloud application intended to replace MS Office. Among the problems in a comparison, we have***:

*Here I speak of third-party clouds. If an enterprise sets up its own cloud structures and proceeds with sufficient care, including e.g. ensuring that its own servers are used and that access is per Intranet/VPN (not Internet), we have a different situation.

**The word “cloud” itself is extremely problematic, usually poorly defined, inconsistently used, or even used as a slap-on endorsement to add “coolness” to an existing service. (Sometimes being all-inclusive of anything on the Internet to the point of making it meaningless: If I have a virtual server, I have a virtual server. Why would I blabber about cloud-this and cloud-that? If I access my bank account online, why should I want to speak of “cloud”?) Different takes might be possible based on what exact meaning is intended resp. what sub-aspect is discussed (SOA interactions between different non-interactive applications, e.g.). While I will not attempt an ad hoc definition for this post, I consider the discussion compatible with the typical “buzz word” use, especially in a user-centric setting. (And I base the below on a very specific example.)

***With some reservations for the exact implementation and interface; I assume access/editing per browser below.

  1. There are new potential security holes, including the risk of a man-in-the-middle attack and of various security weaknesses in and around the cloud tool (be they technical, organizational, “social”, whatnot). The latter is critical, because the user is forced to trust the service provider and because the probability of an attack is far greater than for a locally installed piece of software.
  2. If any encryption is provided, it will be controlled by the service provider, thereby both limiting the user and giving the service provider opportunities for abuse. (Note e.g. that many web-based email services have admitted to or been caught making grossly unethical evaluations of private emails.) If an extra layer of encryption can at all be provided by the user, this will involve more effort (a minimal sketch of such a user-added layer follows below). Obviously, with non-local data, the need for encryption is much higher than for local data.
  3. If the Internet is not accessible, neither is the data.
  4. If the service provider is gone (e.g. through service termination), so is the data.
  5. If the user wishes to switch provider/tool/whatnot, he is much worse off than with local data. In a worst case scenario, there is neither a possibility to down-load the data in a suitable form, nor any stand-alone tools that can read them. In a best case scenario, he is subjected to unnecessary efforts.
  6. What about back-ups? The service provider might or might not provide them, but this will be outside the control of the user. At best, he has a button somewhere with “Backup now!”, or the possibility to download data for a backup of his own (but then, does he also have the ability to restore from that data?). Customizable backup means will not be available and if the service provider does something wrong, he is screwed.
  7. What about version control? Notably, if I have a Git/SVN/Perforce/… repository for everything else I do, I would like my documents there, not in some other tool by the service provider—if one is available at all.
  8. What about sharing data or collaborating? Either I will need yet another account (if the service provider supports this at all) for every team member or I will sloppily have to work with a common account.

To boot, web-based services usually come with restrictions on what browsers, browser versions, and browser settings are supported, forcing additional compromises on the users.
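
Regarding point 2 above, a user can in principle put his own encryption layer in front of such a service, so that the provider only ever stores ciphertext. A minimal sketch in Python, assuming the widely used “cryptography” package; upload and download are hypothetical stand-ins for whatever API a given provider offers, not any real service:

    from cryptography.fernet import Fernet

    STORE = {}  # stand-in for the provider's storage

    def upload(name, blob):      # hypothetical provider API
        STORE[name] = blob

    def download(name):          # hypothetical provider API
        return STORE[name]

    key = Fernet.generate_key()  # must never leave the user's machine
    f = Fernet(key)

    document = "Quarterly figures, not for the provider's eyes".encode()
    upload("report.txt", f.encrypt(document))   # provider sees only ciphertext
    assert f.decrypt(download("report.txt")) == document

The catch is visible in the sketch itself: the service is reduced to storing opaque blobs, so browser-based editing, searching, and sharing no longer work on the encrypted data, and key management becomes the user’s problem—exactly the “more effort” mentioned above.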

Yet another example is Bitcoin: A Bitcoin has value simply because some irrational people feel that it should have value and are willing to accept it as tender. When that irrationality wears off, all Bitcoins become valueless. Ditto if Bitcoin is supplanted by another variation on the same theme that proves more popular.

In contrast, fiat money (e.g. the Euro or the modern USD) has value because the respective government enforces it: Merchants, e.g., are legally obliged, with only minor restrictions, to accept the local fiat money. At most, a merchant can disagree about how much e.g. a Euro should be worth in terms of whatever he is selling, and raise his prices—but if he does so by too much, a lack of customers will ruin him.

Similarly, older currencies that were on the gold (silver, whatnot) standard, or were actually made of a suitable metal, had a value independent of themselves and did not need an external enforcer or any type of convention. True, if everyone had suddenly agreed that gold was severely over-valued (compared to e.g. bread), the value of a gold-standard USD would have tanked correspondingly. However, gold* is something real, it has practical uses, and it has proved enduringly popular—we might disagree about how much gold is worth, but it indisputably is worth something. A Bitcoin is just proof that someone, somewhere has performed a calculation that has no other practical use than to create Bitcoins…

*Of course, it does not have to be gold. Barring practical reasons, it could equally be sand or bread. The money-issuing bank guarantees to at any time give out ten pounds of sand or a loaf of bread for a dollar—and we have the sand resp. bread standard. (The gold standard almost certainly arose due to the importance of gold coins and/or the wish to match non-gold coins/bills to the gold coins of old. The original use of gold as a physical material was simply its consistently high valuation for a comparatively small quantity.)
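
To illustrate what that calculation amounts to: Bitcoin mining is, roughly, a search for a number that gives a cryptographic hash with a rare property. The toy sketch below (Python; grossly simplified, as the real protocol hashes block headers, adjusts the difficulty, rewards the finder, etc.) captures the essence: the work is hard to perform, trivial to verify, and useful for nothing beyond producing the token.

    import hashlib

    def mine(block_data, difficulty):
        # Toy proof-of-work: find a nonce such that SHA-256(data + nonce)
        # starts with 'difficulty' zero hex digits. The search serves no
        # purpose beyond being hard; verification is a single hash.
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
            if digest.startswith("0" * difficulty):
                return nonce
            nonce += 1

    nonce = mine("some transactions", difficulty=4)   # tens of thousands of tries
    assert hashlib.sha256(f"some transactions{nonce}".encode()).hexdigest().startswith("0000")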

As an aside, the above are all ideas that are objectively bad, no matter how they are twisted and turned. This is not to be confused with some other things I like to complain about, e.g. the idiocy of various social media of the “Please look at my pictures from last night’s party!” or “Please pay attention to me while I do something we all do every day!” type. No matter how hard it is for me to understand why there is a market for such services, it is quite clear that the market is there and that it is not artificially created. Catering to that market is legitimate. In contrast, insofar as the above hypes have a market, it is mostly through people being duped.

(However, if we are more specific, I would e.g. condemn Facebook as an attempt to create a limiting and proprietary Internet-within-the-Internet, and as having an abusive agenda. A more independent build-your-own-website kit, possibly in combination with RSS or an external notification or aggregation service following a standardized protocol would be a much better way to satisfy the market from a user, societal, and technological point of view.)

Written by michaeleriksson

September 18, 2017 at 11:37 pm

Here we go again… (Jason Stockley trial and riots)


Apparently, there has been another acquittal of a White guy who killed a Black guy—and another riot…

This ties in with several of my previous posts, including my recent thoughts around the Charlottesville controversy and my (considerably older) observations around the Zimmerman–Martin tragedy.

What is deplorable here is not the killing or the acquittal (see excursion below), but the utter disregard for the rights of others, for the justice system, and (in a bigger picture) democratic processes that is demonstrated again, and again, and again by certain (at least partially overlapping) groups, including parts of the Black movements, factions of Democrat supporters, and Leftist extremists (including self-appointed anti-fascists, notably various Antifa organizations, who are regularly worse fascists than the people they verbally and physically attack).

Looking at the U.S. alone, we have atrocious examples like the reactions around the Michael Brown and Trayvon Martin shootings, trials, and verdicts (followed by racially motivated or even racist outrage by large parts of the Black community) or the post-election protests against Donald Trump* (he is elected by a democratic process one day; starting the next day, long before he even assumed office, there are protesters taking to the streets to condemn him as if he were Hitler reincarnate). Of course, there is a more than fair chance that the Charlottesville riots (cf. link above) partially, even largely, fall into this category—here Trump is perfectly correct.

*Can or should we be disappointed, even distraught, when we feel that an important election has gone horribly wrong? Certainly: I would have felt horrible had Hillary Clinton won. (While I could live with Obama.) Would I have taken to the streets and tried to circumvent the democratic processes (had I been in the U.S.)? Hell no! When an election is over, it is over. (Barring election fraud and “hanging chad”-style issues.) Feel free to criticize poor decisions, make alternate suggestions to policy, attack abuse of power or Nixon-/Clintonesque behavior in office, whatnot—but respect the election result!

In Sweden and Germany (cf. the Charlottesville post), it is par for the course for any “right-wing” demonstration to be physically attacked by fanatical Leftists. Or consider the treatment of SD in Sweden. Or consider how, in Germany, immense efforts are made to destroy the nationalist NPD, while a just as extreme and even more hare-brained descendant of the SED actually sits in parliament, and the far more extreme MLPD, openly calling for a communist revolution, is left in peace… Or take the methods of e.g. feminists* that I have written about so often, where dissenters are arbitrarily censored, unfairly maligned, shouted down, have their opinions grossly distorted, … In fact, at least in Germany, many Leftists seem to think that the way to change people’s minds is to make as much noise as possible—with no effort put into forming a coherent argument or presenting actual facts. To take to the streets with banners, drums, and empty catch phrases far away from the politicians seems to be the only thing some of them are able to do.

*The old family of claims that “If you want to anger a [member of a non-Leftist group], tell him a lie; if you want to anger a [member of a Leftist group], tell him the truth.” may be an exaggeration and over-generalization, but there remains a lot of truth to it. When applied to some sub-groups (notably feminists, the extreme Left, the likes of Antifa, …) it comes very close to being the literal truth. They walk through their lives with a pre-conceived opinion in their heads, blinders on their eyes, and simply cannot handle it when some piece of contrary information manages to sneak into their restricted field of view.

There is a massive, truly massive, problem with large parts of the Left and its attitude that “if we don’t like it, it must be destroyed by whatever means necessary”—no matter the law, civic rights, democratic values, …

This insanity must be stopped!

Please respect freedom of speech!

Please respect democratic processes!

Please understand how “presumed innocent” and “beyond reasonable doubt” work in a court!

Please look at the actual facts of a matter before exploding in rage!

Please save the riots for true abominations—and direct them solely at the authorities*!

Etc.

*A common thread of (even politically motivated) riots is that they hit innocent third parties worse than the presumed enemies of the rioters, having more in common with random vandalism and violence than with political protest.

Excursion on the acquittal: As I gather from online sources, e.g. [1] and [2], the killer was a police officer at the end of a car chase of the deceased, who was a known criminal* on probation—and who had heroin in his car. The exact events at and around the end of the car chase are not entirely clear, but applying, as we must and should, “reasonable doubt”, it is clear that there was nowhere near enough evidence for a conviction for the raised “first-degree murder” charge—even had the police officer been guilty (which we do not know and, basing an opinion on news reports, we do not even have a strong reason to suspect). Under absolutely no circumstance can we arbitrarily apply different standards of proof to different types of crimes (including sex crimes!), to different types of suspects, or based on our personal involvement or pet issues. To boot, we must understand that while e.g. a jury can contain members who have preconceived opinions and personal sym- or antipathies, who fall for peer or press pressure, who are deeply stupid, whatnot, the jury members will usually know far more about the evidence situation than even knowledgeable observers—let alone random disgruntled citizens: If they see things differently than the disgruntled citizen, then the explanation will very often be that they know what they are talking about and that he does not. (As can be seen quite clearly with the Zimmerman trial. Cf. earlier link.)

*An interesting observation is that all or almost all similar cases I have seen have had a victim or “victim” (depending on the situation) that was not only Black, but also had a criminal history, albeit sometimes petty. This includes Anthony Lamar Smith, Michael Brown, Trayvon Martin, …—even Rodney King, who set his car chase in motion when he tried to hide a parole violation… (Which is not in any way to defend the excessive violence used and unnecessary cruelty shown by the police in that case.) This is important: These cases have not occurred because of random harassment or (at least exclusive) “racial profiling”—these are current or former criminals, many of whom were actually engaging in criminal behavior during or immediately prior to the events.

There is a problem here, but it is certainly not the acquittal and almost certainly not the behavior of this specific police officer. Neither is there reason to believe that the killing was racially motivated. Neither is there reason to believe that an innocent man was killed (as might or might not have been the case with Trayvon Martin)—this was a criminal being killed while perpetrating crimes and trying to avoid arrest*. No, the problem is the general thinking within the U.S. justice system that guns are a reasonable early recourse and that it is better to shoot first than to be shot. (This could in turn be necessitated by the surrounding society or the attitudes of the criminals, but moving beyond “motivated” is conjectural from my point of view.) Possibly**, use of tranquilizer guns might be a viable option. Possibly**, a rule that guns must only be used against a criminal/suspect himself with a drawn gun could work. Possibly**, a directive makes sense that an attempt must first be made to take someone out by other means*** before a likely lethal shot is allowed to be attempted. Either way: If that would have been a legitimate cause for a riot, it should have taken place after the actual shooting—in 2011.

*He might not have deserved to die for these crimes—even criminal lives matter. However, there is a world of difference between killing an innocent, or even a hardened criminal, just walking down the street, and killing someone who has just recklessly tried to outrun the police in a car chase. If in doubt, he would almost certainly not have been killed, had he surrendered peacefully in the first place. Notably, without the heroin (and possible other objects) that he criminally possessed, he would have had no obvious reason to run.

**I am too far away and lack relevant experience to make more than very tentative suggestions, and I make no guarantee that any of the mentioned examples would prove tenable.

***Depending on the situation, this could include a tasering, a tackle, a (mostly non-lethal) leg or gun-arm shot, …; possibly, in combination with waiting for an opportunity for a reasonable amount of time. (In this specific case, e.g. that the suspect leaves the car.)

Written by michaeleriksson

September 16, 2017 at 9:39 pm

The absurdities of my current project (and some attempts to draw lessons)


I have spent roughly two years in a row with my current client, to which can be added at least two stints in the past, working on several related projects. Until the beginning of the latest project, roughly six months ago*, I have enjoyed the cooperation, largely through the above-average amount of thinking needed, but also due to my low exposure to bureaucracy, company politics, and the like. With this project, things have changed drastically: problems with and within the project have turned the experience negative, to the point of repeatedly having negative effects on my private life**, and I definitely do not feel like accepting another extension. I am, in fact, at a point where I want to bang my head against the wall and scream my frustration on a regular basis.

*Giving an exact number is tricky, with the unofficial kick-off and preparations taking place long before the official kick-off, combining a long phase of (often failed or useless, cf. below) requirement clarifications and study of various documents with work on other projects.

**When problems of a “political” nature occur or when my work is hindered by the incompetence or lack of cooperation from others, I often have difficulties letting go of these issues in the evening. This is one of the reasons that I prefer working as a contractor—I am less invested than I was as an employee and can (usually…) take a more relaxed attitude towards internal problems of a given client.

To look at a few problems:

(I apologize for the poor structuring below, but it is almost 1 A.M. in Germany and I want to close this topic.)

  1. The originally communicated live date was end of June. This was completely and utterly unrealistic, which I and several other developers communicated from the start. Eventually, the plan was changed to first quarter 2018—not only much more realistic, but something which allowed us to use a high-quality approach, even undo some of the past damage from earlier projects with a too optimistic schedule*. Unfortunately, some months in, with development already long underway based on this deadline, the schedule was revised considerably again**: Now the actual live date was end of October, with a “pilot” release, necessitating almost all the work to be already completed, several weeks before that…

    *What is often referred to as “technical debt”, because you gain something today and lose it again tomorrow—with usurious and ruinous interest… Unfortunately, technical debt is something that non-developers, including executives and project managers, only rarely understand.

    **I will skip the exact reasoning behind this, especially since there are large elements of speculation in what has been communicated to me. However, it falls broadly in the category of external constraints—constraints, however, that were, or should have been, known quite a bit earlier.

    Cue panic and desperate measures, including scrapping many of the changes we had hoped for. Importantly, because a waterfall* model with a separate test phase was followed, we were forced to the observation that from a “critical path” point of view we were already screwed: The projected deadline could only be reached by reducing the test phase to something recklessly and irresponsibly short. As an emergency measure, in order to use test resources earlier and in parallel with development, the decision was made** to divide the existing and future work tasks into smaller groupings and releases than originally intended. On the upside, this potentially saved the schedule; on the downside, it increased the amount of overall work to be completed considerably. Of course, if we had known about the October deadline at the right time and if we had not believed the earlier claims, this could have been done in a much more orderly manner and with far less overhead: We almost certainly would have kept the deadline, had this been what was communicated to begin with. As is, it will take a miracle—indeed, even the official planning has since been adjusted again, to allow for several more weeks.

    *I consider waterfall projects a very bad idea, but I do not make the policy, and we have to do what we have to do within the given frame-work.

    **With my support on a reluctant lesser-of-two-evils basis. Now, I am used to working with smaller “packages” of work and smaller releases in a SCRUM style from many of my Java projects. Generally, I consider this the better way; however, it is not something that can be easily done “after the fact”. To work well, it has to be done in a continual and controlled manner from the very beginning.

    Planner* lessons: Never shorten a dead-line. Be wary of rushing a schedule—even if things get done this time, future problems can ensue (including delays of future projects). Never shorten a dead-line. Do not set deadlines without having discussed the topic in detail with those who are to perform the work—the result will be unrealistic more often than not. Never shorten a dead-line.

    *For the sake of simplicity, I use the term planner as a catch-all that, depending on the circumstances, might refer to e.g. an executive, administrator, project manager, middle manager, … I deliberately avoid “manager”.

    Developer lessons: Do not trust dead-lines. Even in a waterfall project, try* to divide the work into smaller portions from the beginning. It will rarely hurt and it can often be a help.

    *This is not always possible or easy. In this case, there were at least two major problems: Firstly, poor modularization of the existing code with many (often mutual or circular) dependencies, knowledge sharing, lack of abstraction, … which implied that it was hard to make different feature changes without affecting many of the same files or even code sections. Secondly, the fact that the code pipeline was blocked by another project for half an eternity. (I apologize for the unclear formulation: Explaining the details would take two paragraphs.)

  2. During a meeting as early (I believe) as February, the main product manager (and then project manager) claimed that the requirements document would be completed in about a week—it still is not! In the interim, we have had massive problems getting him to actually clarify or address a great many points, often even getting him to understand that something needs to be clarified/addressed. This for reasons (many of which are not his fault) that include his lack of experience, repeated vacations and/or paternity leaves, uncooperative third parties and internal departments that need consulting, too many other tasks for him to handle in parallel, and, to be frank, his not being overly bright.

    A common problem was questions being asked and a promise given that the next version of the requirements document would contain the answer or that something was discussed in a meeting and met with the same promise—only for the next version to not contain a word on the matter. When asked to remedy this, the answer usually was either a renewed (and often re-broken) promise or a request for a list of the open issues—something which should have been his responsibility. Of course, sending a list with open issues never resulted in more than several of the issues actually being resolved—if any…

    As for the requirements document: I asked him early on to send me the updated version with “track changes” (MS Word being the prescribed tool) activated either once a week or when there had been major changes. There was a stretch of several months (!) when a new version was promised again and again but never arrived. When it did arrive, there were no “track changes”…

    I recall with particular frustration a telephone conversation which went in an ever repeating circle of (with minor variations):

    Me: Please send me the changes that you have already made. I need something to work from.

    Him: Things could change when I get more feedback. It makes no sense to give you my current version.

    Me: I’ll take the risk.

    Him: You can work with the old version. Nothing will change anyway.

    Me: If nothing will change, why can’t you give me your current version?

    Him: Because things could change when I get more feedback. It makes no sense to give you my current version.

    (And, yes, I suspect that the problem was that he simply had left the document unchanged for half an eternity, despite knowing of a number of changes that must be made, and did not want to admit this. He likely simply did not have an interim version to send, because the current state of the document was identical or near identical to the older version. Had he sent this document, he would have been revealed—hence weird evasions and excuses that made no sense. But I stress that this is conjecture.)

    The other product managers are better; however, the only one assigned to the project full-time is a new employee who has started from scratch, with the others mostly helping out during vacations (and doing so with a shallower knowledge of the project and its details).

    Some blame falls back on development, specifically me, for being too trusting, especially through not keeping a separate listing of the open points and questions asked—but this is simply not normally needed from a developer. This is typically done by either product or project management—and he was both at the time. See also the discussion below of what happened when we did keep such a list…

    Planner lessons: Make sure that key roles in an important and complex project are filled with competent and experienced people and that they have enough time to dedicate to the project. (Both with regard to e.g. vacations and hours per typical week.) Do not mix roles like product and project manager—if you do, who guards the guards?

    Developer lessons: Do not trust other people to do their jobs. Escalate in a timely manner. Find out where you need to compensate in time.

    Generic/product manager lesson: If you are over-taxed with your assignments, try to get help. If you cannot get help, be honest about the situation. Do not stick your head in the sand and hope that everything will pan out—it probably will not.

  3. After the first few months, a specialist project manager was added to the team. As far as I was concerned*, he had very little early impact. In particular, he did not take the whip to the main product manager the way I had hoped.

    *Which is not automatically to say that he did nothing. This is a large project with many parties involved, and he could conceivably have contributed in areas that I was rarely exposed to, e.g. contacts with third parties.

    Around the time the October deadline was introduced, a second project leader was added and upper management appeared to develop a strong interest in the project’s progress. Among the effects was the creation of various Excel sheets, Jira tasks, overviews, estimates, … (Many of which, in so far as they are beneficial, should have been created a lot earlier.) At this point, the main task of project management appeared to be related to project reporting and (unproductive types of) project controlling, while other tasks (notably aiming at making the project run more efficiently and effectively) were not prioritized. For instance, we now have the instruction to book time on various Jira tasks, which is a major hassle due to the sheer number*, how unclearly formulated they are, and how poorly they match the actual work done. To boot, a few weeks ago, we were given the directive that the tasks to book time for the “technical specification” had been closed and that we should no longer use these—weird, seeing that the technical specification is still not even close to being done: Either this means that there will be no technical specification or that the corresponding efforts will be misbooked on tasks that are unrelated (or not booked at all).

    *I counted the tasks assigned to me in Jira last week. To my recollection, I had about ten open tasks relating to actual actions within the project—and fourteen (14!) intended solely to book my time on. Do some work—and then decide which of the fourteen different, poorly described booking tasks should be used… It would have been much, much easier e.g. to just book time on the actual work tasks—but then it would have been harder for project management to make a progress report. In contrast, we originally had three tasks each, reflecting, respectively, meetings, technical specification, and development. Sure, there were border-line cases, but by and large, I could just make a rough approximation at the end of the day that I had spent X, Y, and Z on these three, clearly separated tasks. To that, I had no objections.

    We also had a weekly project/team meeting scheduled for one hour to report on progress. After we exceeded the time allotted, what was the solution for next week? (Seeing that we are on a tight schedule…) Keep a higher tempo? No. Replace the one big meeting with a daily ten minute mini-briefing? No. Skip the meeting and just have the project manager spend five to ten minutes with each individual developer? No. Increase the allotted time by another half hour? Yes!

    By now we have a third project manager: That makes three of them to two product managers, two testers, and seven developers. Call me crazy, but I find these proportions less than optimal…

    At some stage, the decision was made to keep a list of open questions in Confluence to ensure that these actually were resolved. Of course, the responsibility to fill these pages landed on the developers, even when emails with open items had already repeatedly been sent to product management. With barely any progress with the clarifications noticeable in Confluence, this procedure (originally agreed between development and product management) is suddenly unilaterally revoked by project management: Confluence is not to be used anymore, what is present in Confluence “does not exist”, and all open issues should be moved, again by development, to a new tool called TeamWork. Should the developers spend their time developing or performing arbitrary and/or unnecessary administrative tasks? We are on a deadline here…

    Well, TeamWork is an absolute disaster. It is a web based, cloud-style tool with a poorly thought-through user interface. It does nothing in the areas I have contact with, which could not be done just as well per Jira, Confluence, and email. Open it in just one browser tab, and my processor moves above 50% usage—where it normally just ticks along at no more than a few percent*. To boot, it appears** that the company’s data are now resting on a foreign server, on the Internet, with a considerable reduction in data security, and if the Internet is not there, if the cloud service is interrupted or bankrupted, whatnot, the data could immediately become inaccessible. And, oh yes, TeamWork does not work without JavaScript, which implies that I have to lower my security settings accordingly, opening holes where my work computer could be infected by malware and, in a worst case scenario, the integrity of my current client’s entire Intranet be threatened.

    *My private Linux computer, at the time of writing, does not even breach 1 percent…

    **Going by the URL used for access.

    The whole TeamWork situation has also been severely mishandled: Originally, it was claimed that TeamWork was to be used to solve a problem with communications with several third parties. Whether it actually brings any benefit here is something I am not certain of, but it is certainly not more than what could have been achieved with a 1990s style mailing-list solution (or an appropriately configured Jira)—and it forces said third parties to take additional steps. But, OK, let us say that we do this. I can live with the third party communication, seeing that I am rarely involved and that I do not have to log in to TeamWork to answer a message sent per TeamWork—they are cc-ed per email and answers to these emails are distributed correctly, including into TeamWork itself. However, now we are suddenly supposed to use TeamWork for ever more tasks, including those for which we already have better tools (notably Jira and Confluence). Even internal communication per email is frowned upon, with the schizophrenic situation that if I want to ask a product manager something regarding this project, I must use TeamWork; for any other project, I would use email… What is next: No phone calls? I also note that these instructions come solely from project management. I have yet to see any claim from e.g. the head of the IT department or the head of the Software Development sub-department—or, for that matter, of the Product Management department.

    To boot: The main project manager, who appears to be the driving force behind this very unfortunate choice of tool, has written several emails* directed at me, cc-ing basically the entire team, where he e.g. paints a picture of me as a sole recalcitrant who refuses to use the tool (untruthful: I do use it, and I am certainly not the only one less than impressed) and claims that it should be obvious that Confluence was out of the picture, because Confluence was intended to replace the requirements document (untruthful, cf. below), so that product management would be saved the effort to update it—but now it was decided that the requirements document should be updated again (untruthful, cf. below); ergo, Confluence is not needed. In reality, the main intention behind Confluence (resp. this specific use, one of many) was to track the open issues (cf. the problems discussed above). Not updating the requirements document was a secondary suggestion on the basis that “since we already have the answers in Confluence, we won’t need to have the requirements document too”. Not even this was actually agreed upon, however, and I always took the line that everything that was a requirement belonged in the requirements document. To the best of my knowledge, the head of Software Development has consistently expressed the same opinion. There can be no “again” where something has never stopped.

    *Since I cannot access my email account at my client’s from home, I have to be a little vaguer here than I would like.

    In so far as I have used TeamWork less than I could have, which I doubt, the problem rests largely with him: Instead of sending us a link to a manual (or a manual directly), what does he do? He sends a link to instructional videos, as if we were children and not software developers. There is a load of things to do on this project—wasting time on videos is not one of them. To boot, watching a video with sound, which I strongly suspect would be needed, requires head-phones. Most of the computers on the premises do not have head-phones by default, resulting in an additional effort.

    Drawing lessons in detail here is too complex a task, but I point to the need to hire people who actually know what they are doing; to remember that project management has several aspects, project reporting being just one; and to remember what is truly important, namely getting done on time with a sufficient degree of quality.

    While not a lesson, per se: Cloud-style tools are only very, very rarely acceptable in an enterprise or professional setting—unless they run on the using company’s own servers. (And they are often a bad idea for consumers too: In very many cases, the ones drawing all or almost all benefits compared to ordinary software solutions are the providers of the cloud services—not the users.) If I were the CEO of my client, I would plainly and simply ban the use of TeamWork for any company internal purposes.

  4. Last year four people were hired at roughly the same time to stock up the IT department, two of whom (a developer and a tester) would have been important contributors to this project, while the other two (application support) would have helped in reducing the work-load on the developers in terms of tickets, thereby freeing resources for the project. They are all (!) out again. One of these was fired for what appears (beware that I go by the rumor mill) to be political reasons. Two others have resigned for reasons of dissatisfaction. The fourth, the developer, has disappeared under weird circumstances. (See excursion below.) This has strained the resources considerably, made planning that much harder, and left that much more work with us others to complete in the allotted time.

    Planner lesson: Try* to avoid hiring in groups that are large compared to the existing staff**. It is better to hire more continuously, because new employees are disproportionally likely to disappear, and a continuous hiring policy makes it easier to compensate. Keep an eye on new (and old…) employees, to make sure that lack of satisfaction is discovered and appropriate actions taken, be it making them satisfied or starting the search for replacements early. Watch out for troublesome employees and find replacements in time. (Cf. the excursion at the end; I repeatedly mentioned misgivings to his team lead even before the disappearance, but they were, with hindsight, not taken sufficiently seriously.)

    *Due to external constraints, notably budget allocations, this is not always possible (then again, those who plan those aspects should be equally careful). It must also be remembered that the current market is extremely tough for employers in the IT sector—beggars can’t be choosers.

    **Above, we saw (in a rough guesstimate) a 30% increase of the overall IT department, as well as a 100% increase of the application support team and a 50% increase of the QA team.

  5. At the very point where, finally, everyone intended for the project was allotted to it full-time and the looming deadline dictated that everyone should be working like a maniac, what happens? The German vacation period begins and everyone and his uncle disappears for two to four weeks… There was about a week when I was the sole (!) developer present (and I was too busy solving production problems to do anything for the project…). Product management, with many issues yet to be clarified, was just as thin. There was at least one day when not one single product manager was present…

    Lessons: Not much, because there is precious little to be done in Germany without causing a riot. (And I assume that the planning that was done did already consider vacations, being unrealistic for other reasons.)

Excursion on the missing developer: His case is possibly the most absurd I have ever encountered and worthy of some discussion, especially since it affected the project negatively not just through his disappearance but through the inability to plan properly—not to mention being a contributor to my personal frustration. He was hired to start in September (?) last year. Through the first months, he gave a very professional impression, did his allotted task satisfactorily (especially considering that he was often dropped into the deep end of the pool with little preparation), and seemed to be a genuine keeper. The one problem was that he had not actually done any “real” development (coding and such), instead being focused on solving tickets and helping the test department—and that was not his fault, just a result of how events played out with the on-going projects.

However, at some point in the new year, things grew weirder and weirder. First he started to come late or leave early, or not show up at all, for a variety of reasons including (claimed or real) traffic disturbances and an ailing girl-friend, who had to be driven here-and-there. Taken each on their own, this was not necessarily remarkable, but the accumulation was disturbing. To boot, he often made claims along the lines “it’s just another week, then everything is back to normal”—but when next week came he had some other reason. He also did not take measures to minimize the damage to his employer, e.g. through checking into a hotel* every now-and-then or through cutting down on his lunch breaks**. He especially, barring odd exceptions, did not stay late on the days he came late or come early on the days he left early. In fact, I quite often came earlier and left later than he did—I do not take a lunch break at all… A repeated occurrence was to make a promise for a specific day, and then not keep it. I recall especially a Thursday when we had a production release. He explicitly told me that he would be at work early the following day to check the results, meaning that I did not have to. I slept in, eventually arrived at work, and found that he was still not there. Indeed, he did not show the entire day… I do not recall whether there were any actual problems that morning, but if there were, there was no-one else around with the knowledge to resolve them until my arrival (due to vacations). One day, when he did not have any pressing reason to leave early, he upped and left early. Why? He wanted to go ride his horse…

*Hotels can be expensive, but he had a good job and it is his responsibility to get to and from work in a reasonable manner. Problems in this area are his problems, not his employer’s.

**In a rough guesstimate, he spent an average of 45 minutes a day on lunch, even on days when he came late or knew that he would leave early. In his shoes, I would simply have brought a sandwich or two, and cut the lunch break down to a fraction on these days. He did not. There was even one day when he came in at roughly 11:30, went to lunch half-an-hour later, and was gone almost an hour… Practically speaking, he started the workday shortly before 13:00… He may or may not have stayed late, but he did not stay late enough to reach an eight hour day—security would have thrown him out too early in the evening…

The claimed problems with his girl-friend grew larger and larger. At some point, he requested a three-day vacation to drive her down to Switzerland for a prolonged treatment or convalescence. He promised that once she was there, there would be a large block of time without any further disturbance. The request was granted—and that was the last we ever saw of him. No, actually to my very, very great surprise, after a long series of sick notes and postponements, he actually showed up for three days, only to disappear again permanently after that, having announced his resignation and a wish to move to Switzerland. During these three days, I briefly talked to him about his situation (keeping fake-friendly in the naive belief that he actually intended to show up for the remainder of his employment) and he claimed that he actually had intended to show up two workdays earlier, feeling healthy, and, in fact, being fully clothed and standing in the door-way, when his girl-friend pointed out that the physician had nominally given him another two days. He then stayed at home, because, by his own claim, he felt that he could use that time better… In this manner, a three-day vacation somehow turned into three workdays—spread over a number of otherwise workless months.

Written by michaeleriksson

September 13, 2017 at 11:58 pm

Taking my grandfather’s axe to the politically correct


Among the many things that annoy me about the politically correct is their wide-spread inability to differentiate between word and concept/intent/meaning/…. At the same time, I have long been annoyed by the pseudo-paradox of the “grandfather’s axe”. Below I will discuss some related, partially overlapping, points.

One of the most popular “pop philosophy” questions/riddles/paradoxes is (with variations):

This axe belonged to my grandfather. The head has been replaced occasionally and so has the handle. Is it still the same axe?

At the same time, we have the old saying “you cannot cross the same river twice”. How then can it be that I have crossed the Rhine hundreds of times? (Not to mention many crossings of other individual rivers, including the Main and the Isar.)

In the first case, we have a question that simply cannot be resolved by logic, because it is ambiguously formulated; in the second, the apparent contradiction arises out of a very similar ambiguity:

The meaning of “same”* is not defined with sufficient precision, because “same” can be used to refer to several (possibly many) different concepts. When we say “same axe” or “same river”, do we mean e.g. that it is the same basic entity, although changed over time, having some constant aspect of identity; or that it is identically the same without any change? Something in between? Looking at the axe example, it might actually be too unrefined to get the point across, because it only has the two parts (with some reservations for the type of axe) and it might not be obvious that more than one interpretation is reasonable. Consider instead the same example using a Model T Ford: Someone has an old Model T standing in a barn. His great-grand-parents bought it, and several generations have made sure that it has been kept running over the years, be it through sentimentality, interest in old cars, or hope for a future value increase. By now, every single part** of it has at some point been exchanged. Is it still the same car? If not, when did it cease to be the original car? Similarly, are the hands I am typing with still the same hands that I used seven years ago***? Fourteen years ago? That I was born with more than 42 years ago?

*Alternatively, the ambiguity could be seen to lie in “axe” and “river”, or a disagreement about what part of an entity carries the identity. In the case of river crossings this might even be the more productive point of attack.

**Assume, for the sake of argument, that this happened a single part at a time and that any part that might have been taken to carry the identity of the car was not changed as a whole in one go—if need be through intervention by a welder.

***Assuming that the common claim holds true that all cells are replaced within a space of seven years. (This is an over-simplification, but could conceivably be true specifically for a hand.)

As is obvious, understanding what someone means by a certain word means understanding which concept is intended. Conversely, it is in our own best interest to avoid such ambiguities to the best of our abilities*, to be very careful before applying a word in a manner that implies a different concept than is normally intended, and to prefer the use of new words when a new differentiation is needed.

*Doing so perfectly is a virtual impossibility.

To exemplify the last point: In today’s world, words like “man”, “woman”, “male”, and “female”, which used to have a clear meaning/make a differentiation in one dimension, can be used for at least two meanings/to make a differentiation in one of two dimensions. It is no longer necessarily a matter of whether someone is physically, biologically a man or a woman—but often whether someone self-identifies as a man or a woman.* Now, this in it self is merely unfortunate and a cause of confusion—the second differentiation should have been done by adding new words. The real problems arise because some groups of politically correct insist** that the new*** meaning is the proper meaning; that other uses would be “sexist”, “discriminatory”, or similar; or, with a wider net, that the concept behind the new meaning is what should dictate discussion.

*For the sake of simplicity, I leave e.g. post-op transsexuals and unusual chromosome combinations out of the discussion.

**See the excursion on “Tolkningsföreträde” below.

***Whether “new” is a good phrasing can be discussed. A possible interpretation of events is that these potential concepts happened to coincide sufficiently in the past that there never was a need to differentiate. If so, which concept or word should be considered new and which old? There might well exist situations where this question cannot be fairly answered or the outcome is a tie. In this specific case, it seems highly plausible to me that e.g. a nineteenth century human would have taken the biological as the sole or very strongly predominant meaning; and the use of “new” seems reasonable. (However, the point is mostly interesting because it gives another example of how tricky word and meaning can be. No reversal of which meaning is old and which is new will change the problems discussed in the main text—on the outside a marginal shift of blame can take place.)

In many cases, words are redefined in such a grotesque manner that I am unable to assume good faith, instead tending towards intellectual dishonesty as the explanation. A prime example is a sometime use of “racism”* to include a strong aspect of having power—with the ensuing (highly disputable) conclusion that black people, as a matter of definition, cannot be racist… At extremes, this can be taken to the point that all white people are, !as a matter of definition!, racist. Similarly, some feminists redefine “rape” in a ridiculous manner, notably to arrive at exaggerated numbers for rape statistics in order to justify their world-view. At the farthest extreme, although thankfully very rarely, I have even seen the claim that if a woman consents today and changes her mind tomorrow (!!!) then she has been raped…

*Generally a very abused word, including e.g. being used as a blanket replacement for “racial” or as a blanket attack against anyone who even contemplates the possibility of racial differences.

Quite often lesser distortions take place (often driven by a general tendency in the overall population), including the artificial limitation of “discrimination” to mean e.g. unlawful or sexist/racist discrimination: Discrimination is generally something good and positive—there are only rare specific types of discrimination that are problematic. Hire someone with a doctorate over a high-school dropout and you have just discriminated—but in the vast majority of circumstances, no reasonable third-party will take offense.

Yet other cases go back to simply not understanding what a word means or to having been so overwhelmed by figurative use that objectively perfectly fine uses are unfairly condemned. There is nothing wrong, e.g., in calling a tribe that still lives in a stone-age society primitive—it is primitive. (In contrast, calling someone with a different set of ideas primitive is either an abuse of language or an insult, depending on preference.) This phenomenon is reflected in the concept of “euphemistic treadmills”, where one word is replaced by a second to avoid demeaning connotations (e.g. through school-yard use), then by a third/fourth/fifth/… when the resp. previous word also develops demeaning connotations (or is perceived to have done so). The problem is, of course, not the word it self, or the connotations of the word, but the connotations of the concept—changing the word is a temporary band-aid, and in the end it does more harm than good. To cruel children, e.g., it will not matter whether that other kid is formally classified as a spastic, as a retard, or as being “differently abled”—he still remains a freak to them. (Or not, irrespective of the word used.)

The last example brings us to the related issue of word and intent: There is, for instance, nothing inherently racist or otherwise “evil” in the word “Nigger”. The word might very well signal a racist intent (especially with its current stigma), but, if so, it is that intent that is problematic—not the word it self. That “nigger” is not an evil word is, if in doubt, proved by its common use by black people without any negative intent, possibly even with a positive intent. Other uses can have yet other intents and implications, including (like here) the purposeful discussion of the word it self. Still, it is quite common that politically correct extremists want to censor, and sometimes succeed in censoring, this word it self in works written when its use was accepted or where its use reflects what a character would realistically have said—not just a negative intent, or even an “outdated stereotype”*, but the word it self. This to the point that similar attempts have been directed at the cognate Swedish word “neger”, which never had any of the implications or the stigma that “nigger” had, nor its historical background**—until some point in (possibly) the eighties when it suddenly grew more and more “offensive”. (No doubt under the direct influence of the, strictly speaking irrelevant, U.S. situation.) Similarly, “bitch”*** is not inherently sexist: There is nothing harmful in my referring to my dearest childhood friend, Liza, as a bitch—it is an objectively true and value-free statement****.

*I strongly disagree with any later interventions into literature, even children’s literature like “Tom Sawyer” or the various “Dr. Dolittle” books, considering them a fraud towards the readers and a crime against the original author. However, that is a different topic and censoring based merely on words is more obviously wrong with less room for personal opinion.

**In my strong impression, even “nigger” only took on its negative connotations over time: There does not seem to have been an original thought-process of “I hate those damn blackies and I want to demean them; ergo, I’ll call them ‘niggers’.”. Instead, as discussed in an earlier paragraph, the word was in use, just like “lamp”, by people with a certain attitude, speaking with a certain intent, and that intent over time came to dominate the connotations. However, there was at least a somewhat rational and understandable process in the U.S.—in Sweden, it was just an arbitrary decision by some group of political propagandists.

***To boot, “bitch”, like many other words, does not necessarily fall into such categories, because such words do not necessarily make statements about e.g. women in general. Often, they are simply sex (or whatnot) specific insults used to refer to an individual woman. Similarly, “son of a bitch” is usually simply a sex specific insult for men. A rare instance when “bitch” could be seen as sexist is when referring to a man as a bitch (“Stop crying, you little bitch!”), because this could be seen to express that his behavior is simultaneously negative and feminine (“only weak women cry—are you a woman?”).

****She was, after all, the family dog…

Excursion on “Tolkningsföreträde”: A very common problem in Sweden is the incessant assumption by groups of politically correct, feminists, …, that they have tolkningsföreträde—originally a legal term, assigning a certain entity the right of interpretation of e.g. a contract in case of disagreement. (I am not aware of a similar term in e.g. the U.S. legal system, but it might well exist. A similar metaphorical application does not appear to be present, however, even if the same attitude often is.) It’s their way or the highway: They decide what a word should mean. They decide what is sexism. They decide what is acceptable. Etc. Have the audacity to question this right, even by pointing to the possibility of another interpretation or by pointing out that their use does not match the established one, and what happens? You (!) are accused of demanding tolkningsföreträde… (And, yes, they appear to be entirely and utterly unaware of the hypocrisy in this—or possibly they use the claim as a deliberately intellectually dishonest means of undermining opponents: I sometimes find it hard to resist the thought of there being some form of commonly taken course or set of guide-lines for the politically correct on how to sabotage one’s opponents…)

Written by michaeleriksson

September 10, 2017 at 9:09 pm

A few thoughts on franchises and sequels

with 5 comments

Being home with a cold, I just killed some time finally watching “Dead Men Tell No Tales”, the fifth and latest installment in the “Pirates of the Caribbean” movie series.

I am left pondering the dilemma of when a franchise should call it quits (from a non-monetary viewpoint—we all know Disney):

On the one hand, this is by no means a bad movie—without the comparison with some of its predecessors, I might have given it a very favorable review within its genre. On the other, it is still a pale imitation of what the first three* movies (especially the first) were. It attempts the same type of banter and humor, but does so with less skill. It has a very similar set of characters, some shared, some molded after old characters, but both the character development and the casting are considerably weaker**. The plot is just another variation of the predecessors’, without any real invention. The music is driven by the same main theme, but otherwise the score and overall music are not in any way noteworthy. (And the main theme is almost annoyingly repetitive when used for so many movies—a key feature of good music is variation.) Etc.

*I have only seen the fourth on one occasion, and my recollection is too vague for a comparison. However, while the first three were basically a trilogy with more-or-less the same cast and characters and a semi-continuous overall story, the fourth and fifth movies were stand-alone efforts with only a partial sharing of characters and (likely) considerably less resources.

**The first movie was absolutely amazing in this regard: Most movies would consider themselves lucky to have even one of Jack Sparrow (sorry, CAPTAIN Jack Sparrow), Will Turner, Elizabeth Swann, Captain Barbossa, or even Commodore Norrington; and the quality continued into the smaller parts. The second and third followed suit. In the fifth, Will and Elizabeth have cameo appearances and the vacuum is filled by imitation characters that compare to the originals as glass does to diamond. Sparrow has gone from dashing, cunning, and comedic to just comedic; while the old comedic-relief characters are pushed to the margin. Norrington is long gone and while there is a British commander of some form, he is entirely unremarkable and has very little screen time. The new villain is just a lesser re-hash of (the undead version of) Barbossa and Davy Jones. Barbossa himself remains strong and has an unexpected character twist, but he worked better as the original villain than he ever did as a non-villain.

In the end, I consider the two-or-so hours well spent, but chances are that I will watch the first movie again* before I watch the fifth (or fourth) a second time. To boot, the follow-up movies to some degree detract from the earlier movies, and from an artistic point of view, the series would have been better off just ending after the third movie. (Some argue “after the first”, and the second and third do not reach the level of the first; however, there is much less of a distance, more innovation, and less repetitiveness compared to the later movies.)

*I have not kept count of my watchings, but over the years it is definitely more than half a dozen for the first, with numbers two and three clocking in at two or three fewer.

Consider “The Matrix” and assume that the sequels had never been made: There might or might not have been disappointment over the lack of sequels and/or wonderment about what the sequels would have been like—but we would not have had the general disappointment over said sequels. (While actually reasonably entertaining, they were nowhere near the first movie and they introduce knowledge that lessens the first movie.) Would it not be better to have the feeling of having missed out on something than not having missed out and being disappointed? Sequels should be made when they can really bring something to the table*—not just because people want more**. The whole “Rocky” franchise contains one noteworthy movie—the first. The rest might have entertained portions of the masses*** and made some people a lot of money, but where was the actual value compared to just making similar movies starting from scratch? “Highlander”**** was utterly ruined by the sequels, which turned the original from an original fantasy movie with something major at stake into part of a ridiculous “aliens are among us” C-movie franchise.

*Of course, this is not always something that can be known in advance, especially when matters of taste come into it. Often, however, the case is crystal clear.

**The actual decision will unfortunately be made based on the studio (or some similar entity) wanting more.

***Including a younger version of me: At least the whole Rocky/Ivan Drago thing was a thrill the first time around. A later watching as an adult left me unmoved.

****It is quite conceivable that my interest would have dropped through my own development, as with Ivan Drago; however, even that aside, the sequels utterly ruined the original.

When I was a teenager, one of my absolute favorite TV series was “Twin Peaks”. This series was artificially cut short on a cliff-hanger at the end of the second season—and for several years I (and many others—“Twin Peaks” was a big deal at the time) hoped that someone in charge would change his mind and that we would see a third season after all. Time went by, and the possibility became unrealistic in light of actors aging or even dying. Now, not years but decades later, there is a third season … of sorts. Based on the first three episodes*, it is a disappointment. Firstly, and almost necessarily, it does not pick up where season two ended, but roughly as far in the future as time has passed in real life, and most of the story-lines, events, what-ifs, …, have already played out during the gap. Secondly, many of the things that made the original great (at least in my teenage mind) are missing, including (apart from cameos) most of the old characters—and the old characters that remain are, well, old. Possibly, it will turn out to be great in the end, but I doubt it. Even if it does turn out great, it will not be what I once wished for. Does the sequel make sense? Probably not.

*The season has progressed further, but I have only watched three episodes so far. I will pick it up again later, and reserve my final judgment until I am either through or have given up.

In contrast, the “Evil Dead” movie franchise, of which I had just a mildly positive impression, has come up with a TV series continuation, again set several decades after the previous installment. It is hilariously entertaining: Funny, good violence, good music, likable characters. OK, the “deeper” and “artistic” values are virtually absent (just as they were in the movies), but for just kicking back for half-an-hour it is a gem—and it is far ahead of the movies. Sometimes, an unexpected continuation is a good thing… Similarly, it is not unheard of for a weak sequel to be followed by a stronger sequel (e.g. the inexcusable “Psycho III” and the reasonably good “Psycho IV”; but, true, no sequel at all would have been the best for “Psycho”) or even, on rare occasions, for the sequels to better the original (“Tremors”; although the starting point was not that impressive).

Going into a full discussion of all sequels and franchises that could be relevant would take forever (“Star Trek”, “James Bond”, “Doctor Who”, various horror franchises, various super-hero franchises, …). I point, however, to my review of “Star Wars VII” for some discussion of “Star Wars” and the topic of sequels. I further note, concerning one of the very few “serious” examples, that “The Godfather III” was another case of an actually reasonably good movie that was simply not up to par with its predecessors (and, yes, Sofia Coppola was one of the worst casting choices in history).

As an aside, reboots and remakes are almost always a bad idea, while the move from one medium to another often fails badly and, even when not, only rarely manages to reach the quality, popularity, whatnot, found in the original medium.

Written by michaeleriksson

September 7, 2017 at 4:02 pm