Michael Eriksson's Blog

A Swede in Germany

The success of bad IT ideas


I have long been troubled by the many bad ideas that are hyped and/or successful in the extended IT world. This includes things that simply do not make sense, things that are inferior to already existing alternatives, and things that are good for one party but pushed as good for another, …

For instance, I just read an article about Apple and how it is pushing for a new type of biometric identification, “Face ID”—following its existing “Touch ID” and a number of other efforts by various companies. The only positive thing to say about biometric identification is that it is convenient. It is, however, not secure and relying* on it for anything that needs to be kept secure** is extremely foolish; pushing such technologies while misrepresenting the risks is utterly despicable. The main problem with biometric identification is this: Once cracked, the user is permanently screwed. If a password is cracked, he can simply change the password***; if his face is “cracked”****, he can have plastic surgery. Depending on the exact details, some form of hardware or software upgrade might provide a partial remedy, but this brings us to another problem:

*There is, however, nothing wrong with using biometric identification in addition to e.g. a password or a dongle: If someone has the right face and knows the password, he is granted access. No means of authentication is foolproof and combining several can reduce the risks. (Even a long, perfectly random password using a large alphabet could be child’s play if an attacker has the opportunity to install a hidden camera with a good view of the user’s keyboard.)

**Exactly what type of data and what abilities require what security will depend on the people involved and the details of the data. Business-related data should almost always be kept secure, but some of it might e.g. be publicly available through other channels. Private photos are normally not a big deal, but what about those very private photos from the significant other? Or look at what Wikipedia says about Face ID: “It allows users to unlock Apple devices, make purchases in the various Apple digital media stores (the iTunes Store, the App Store, and the iBooks Store), and authenticate Apple Pay online or in apps.” The first might or might not be OK (depending on data present, etc.), the second is not, and the third even less so.

***Depending on what was protected and what abilities came with the password, this might be entirely enough, or there might be a need for some additional steps, e.g. a reinstall.

****Unlike with passwords, this is not necessarily a case of finding out some piece of hidden information. It can also amount to putting together non-secret pieces of information in such a manner that the biometric identification is fooled. For instance, a face scanner that uses only superficial facial features could be fooled by taking a few photos of the intended victim, using them to re-create the victim’s face on a three-dimensional mask, and then presenting this mask to the scanner. Since it’s hard to keep a face secret, this scenario amounts to a race between scanner maker and cracker—which the cracker wins by merely having the lead at some point of the race, while the scanner maker must lead every step of the way.
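As an aside on the first footnote above, the combination of factors can be made concrete with a minimal sketch (Python; the names, the threshold, and the toy similarity function are hypothetical stand-ins, not a real face-matching implementation): access is granted only when both the password and the biometric factor check out, so a “cracked” face alone gains nothing.

```python
# Minimal sketch of "biometrics in addition to a password"; not a production design.
# The face-matching part is a crude stand-in for a real (far more complex) matcher.
import hashlib
import os

FACE_THRESHOLD = 0.9  # hypothetical similarity cut-off

def hash_password(password: str, salt: bytes) -> bytes:
    # Standard salted password hashing (PBKDF2 from the standard library).
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def face_similarity(template: list[float], scan: list[float]) -> float:
    # Toy cosine similarity between two feature vectors; real systems differ greatly.
    dot = sum(a * b for a, b in zip(template, scan))
    norms = (sum(a * a for a in template) ** 0.5) * (sum(b * b for b in scan) ** 0.5)
    return dot / norms if norms else 0.0

def authenticate(stored_hash: bytes, salt: bytes, password_attempt: str,
                 template: list[float], scan: list[float]) -> bool:
    password_ok = hash_password(password_attempt, salt) == stored_hash
    face_ok = face_similarity(template, scan) >= FACE_THRESHOLD
    return password_ok and face_ok  # logical AND: both factors must pass

# Usage: enrolment followed by a login attempt.
salt = os.urandom(16)
stored = hash_password("correct horse battery staple", salt)
template = [0.11, 0.72, 0.23]
print(authenticate(stored, salt, "correct horse battery staple", template, [0.10, 0.70, 0.25]))
```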

False positives vs. false negatives. It is very hard to reduce false positives without increasing false negatives. For instance, long ago, I read an article about how primitive finger-print* checkers were being extended to not just check the finger print per se but also to check for body temperature: A cold imprint of the finger would no longer work (removed false positive), while a cut-off finger would soon grow useless. However, what happens when the actual owner of the finger comes in from a walk in the cold? Here there is a major risk of a false negative (i.e. an unjustified denial of access). Or what happens if a user of Face ID has a broken nose**? Has to wear bandages until facial burns heal? Is he supposed to wait until his face is back to normal again, before he can access his data, devices, whatnot?

*These morons should watch more TV. If they had, they would have known how idiotic a mere print check is, and how easy it is for a knowledgeable opponent (say the NSA) to by-pass it. Do not expect whatever your lap-top or smart-phone uses to be much more advanced than this. More sophisticated checks require more sophisticated technology, and usually come with an increase in one or all of cost, space, and weight.

**I am not familiar with the details of Face ID and I cannot guarantee that it will be thrown specifically by a broken nose. The general principle still holds.
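To make the trade-off tangible, here is a toy illustration (Python; all scores and thresholds are invented for the example): with a single similarity threshold, lowering the rate of false positives by raising the threshold necessarily raises the rate of false negatives, and vice versa.

```python
# Toy illustration of the false-positive/false-negative trade-off.
# The scores are made up; real systems show the same effect at larger scale.
impostor_scores = [0.35, 0.48, 0.52, 0.61, 0.66]  # attackers: should be rejected
genuine_scores  = [0.58, 0.72, 0.81, 0.88, 0.93]  # legitimate user: should be accepted

for threshold in (0.5, 0.65, 0.8):
    false_positives = sum(s >= threshold for s in impostor_scores)  # wrongly accepted
    false_negatives = sum(s < threshold for s in genuine_scores)    # wrongly rejected
    print(f"threshold {threshold}: {false_positives} false positives, "
          f"{false_negatives} false negatives")
# Output: 3/0 at 0.5, 1/1 at 0.65, 0/2 at 0.8 -- no threshold gets both to zero here.
```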

Then there is the question of circumvention through abuse of the user: A hostile (say, a robber or a law enforcement agency) could just put the user’s thumb, eye ball, face, whatnot on the detector through use of force. With a password, he might be cowed into surrendering it, but he has the option to refuse even a threat of death, should the data be sufficiently important (say, nuclear launch codes). In the case of law enforcement, I wish to recall, but could be wrong, that not giving out a password is protected by the Fifth Amendment in the U.S., while no such protection is afforded to finger prints used for unlocking smart-phones.

Another example of a (mostly) idiotic technology is the various variations of “cloud”*/** services (as noted recently): This is good for the maker of the cloud service, who now has greater control over the users’ data and access, benefits from a considerable “lock in” effect, can forget about problems with client-side updates and out-of-date clients, … For the users? Not so much. (Although it can be acceptable for casual, private use—not enterprise/business use, however.) Consider, e.g., an Office-like cloud application intended to replace MS Office. Among the problems in a comparison, we have***:

*Here I speak of third-party clouds. If an enterprise sets up its own cloud structures and proceeds with sufficient care, including e.g. ensuring that own servers are used and that access is per Intranet/VPN (not Internet), we have a different situation.

**The word “cloud” itself is extremely problematic, usually poorly defined, inconsistently used, or even used as a slap-on endorsement to add “coolness” to an existing service. (Sometimes being all-inclusive of anything on the Internet to the point of making it meaningless: If I have a virtual server, I have a virtual server. Why would I blabber about cloud-this and cloud-that? If I access my bank account online, why should I want to speak of “cloud”?) Different takes might be possible based on what exact meaning is intended resp. what sub-aspect is discussed (SOA interactions between different non-interactive applications, e.g.). While I will not attempt an ad hoc definition for this post, I consider the discussion compatible with the typical “buzz word” use, especially in a user-centric setting. (And I base the below on a very specific example.)

***With some reservations for the exact implementation and interface; I assume access/editing per browser below.

  1. There are new potential security holes, including the risk of a man-in-the-middle attack and of various security weaknesses in and around the cloud tool (be they technical, organizational, “social”, whatnot). The latter is critical, because the user is forced to trust the service provider and because the probability of an attack is far greater than for a locally installed piece of software.
  2. If any encryption is provided, it will be controlled by the service provider, thereby both limiting the user and giving the service provider opportunities for abuse. (Note e.g. that many web-based email services have admitted to or been caught making grossly unethical evaluations of private emails.) If an extra layer of encryption can be provided by the user at all, this will involve more effort. (A sketch of such a user-controlled extra layer follows below.) Obviously, with non-local data, the need for encryption is much higher than for local data.
  3. If the Internet is not accessible, neither is the data.
  4. If the service provider is gone (e.g. through service termination), so is the data.
  5. If the user wishes to switch provider/tool/whatnot, he is much worse off than with local data. In a worst case scenario, there is neither a possibility to down-load the data in a suitable form, nor any stand-alone tools that can read them. In a best case scenario, he is subjected to unnecessary efforts.
  6. What about back-ups? The service provider might or might not provide them, but this will be outside the control of the user. At best, he has a button somewhere with “Backup now!”, or the possibility to download data for a back-up of his own (but then, does he also have the ability to restore from that data?). Customizable backup mechanisms will not be available, and if the service provider does something wrong, he is screwed.
  7. What about version control? Notably, if I have a Git/SVN/Perforce/… repository for everything else I do, I would like my documents there, not in some other tool by the service provider—if one is available at all.
  8. What about sharing data or collaborating? Either I will need yet another account (if the service provider supports this at all) for every team member or we will sloppily have to share a common account.

To boot, web-based services usually come with restrictions on what browsers, browser versions, and browser settings are supported, forcing additional compromises on the users.
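Regarding item 2 above, a user-controlled “extra layer” of encryption, applied locally before anything reaches the provider, could look roughly like the sketch below (Python; it assumes the widely used third-party “cryptography” package, and the document content and the upload step are placeholders). The point is that the provider only ever stores ciphertext and the key never leaves the user’s machine.

```python
# Sketch of an extra, user-controlled encryption layer in front of a cloud service.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # kept locally, never handed to the provider
document = b"Quarterly figures: ..."   # stands in for the content of a local file

ciphertext = Fernet(key).encrypt(document)  # this is all the provider gets to store
# hypothetical_upload(ciphertext) would go here

restored = Fernet(key).decrypt(ciphertext)  # after a later download
assert restored == document
```

Of course, this is exactly the kind of extra effort mentioned above, and it defeats most of the convenience (in-browser editing, searching, collaboration) that the cloud tool was supposed to provide in the first place.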

Yet another example is Bitcoin: A Bitcoin has value simply because some irrational people feel that it should have value and are willing to accept it as tender. When that irrationality wears off, Bitcoins become valueless. Ditto if Bitcoin is supplanted by another variation on the same theme that proves more popular.

In contrast, fiat money (e.g. the Euro or the modern USD) has value because the respective government enforces it: Merchants, e.g., are legally obliged, with only minor restrictions, to accept the local fiat money. On the outside, a merchant can disagree about how much e.g. a Euro should be worth in terms of whatever he is selling, and raise his prices—but if he does so by too much, a lack of customers will ruin him.

Similarly, older currencies that were on the gold (silver, whatnot) standard, or were actually made of a suitable metal, had a value independent of themselves and did not need an external enforcer or any type of convention. True, if everyone had suddenly agreed that gold was severely over-valued (compared to e.g. bread), the value of a gold-standard USD would have tanked correspondingly. However, gold* is something real, it has practical uses, and it has proved enduringly popular—we might disagree about how much gold is worth, but it indisputably is worth something. A Bitcoin is just proof that someone, somewhere has performed a calculation that has no other practical use than to create Bitcoins…

*Of course, it does not have to be gold. Barring practical reasons, it could equally be sand or bread. The money-issuing bank guarantees to give out, at any time, ten pounds of sand or a loaf of bread for a dollar—and we have the sand resp. bread standard. (The gold standard almost certainly arose due to the importance of gold coins and/or the wish to match non-gold coins/bills to the gold coins of old. The original use of gold as a physical material was simply due to its consistently high valuation for a comparably small quantity.)
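To make the closing remark about Bitcoin slightly more concrete: the “calculation” in question is, in grossly simplified form, a brute-force search for a number whose hash meets an arbitrary condition, as in the toy sketch below (Python; the real protocol differs in many details, but the work has no use outside the system itself).

```python
# Toy proof-of-work: find a nonce such that the hash of the data plus the nonce
# starts with a given number of zero digits. Grossly simplified vs. real Bitcoin.
import hashlib

def proof_of_work(block_data: bytes, difficulty: int) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce  # costly to find, trivial for anyone else to verify
        nonce += 1

nonce = proof_of_work(b"some transactions", difficulty=4)
print(nonce, hashlib.sha256(b"some transactions" + str(nonce).encode()).hexdigest())
```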

As an aside, the above are all ideas that are objectively bad, no matter how they are twisted and turned. This is not to be confused with some other things I like to complain about, e.g. the idiocy of various social media of the “Please look at my pictures from last night’s party!” or “Please pay attention to me while I do something we all do every day!” type. No matter how hard it is for me to understand why there is a market for such services, it is quite clear that the market is there and that it is not artificially created. Catering to that market is legitimate. In contrast, in as far as the above hypes have a market, it is mostly through people being duped.

(However, if we are more specific, I would e.g. condemn Facebook as an attempt to create a limiting and proprietary Internet-within-the-Internet, and as having an abusive agenda. A more independent build-your-own-website kit, possibly in combination with RSS or an external notification or aggregation service following a standardized protocol, would be a much better way to satisfy the market from a user, societal, and technological point of view.)


Written by michaeleriksson

September 18, 2017 at 11:37 pm

Here we go again… (Jason Stockley trial and riots)


Apparently, there has been another acquittal of a White guy who killed a Black guy—and another riot…

This ties in with several of my previous posts, including my recent thoughts around the Charlottesville controversy and my (considerably older) observations around the Zimmerman–Martin tragedy.

What is deplorable here is not the killing or the acquittal (see excursion below), but the utter disregard for the rights of others, for the justice system, and (in a bigger picture) democratic processes that is demonstrated again, and again, and again by certain (at least partially overlapping) groups, including parts of the Black movements, factions of Democrat supporters, and Leftist extremists (including self-appointed anti-fascists, notably various Antifa organizations, who are regularly worse fascists than the people they verbally and physically attack).

Looking at the U.S. alone, we have atrocious examples like the reactions around the Michael Brown and Trayvon Martin shootings, trials, and verdicts (followed by racially motivated or even racist outrage by large parts of the Black community) or the post-election protests against Donald Trump* (he is elected by a democratic process one day; starting the next day, long before he even assumes office, there are protesters taking to the streets to condemn him as if he were Hitler reincarnate). Of course, there is a more than fair chance that the Charlottesville riots (cf. link above) partially, even largely, fall into this category—here Trump is perfectly correct.

*Can or should we be disappointed, even distraught, when we feel that an important election has gone horribly wrong? Certainly: I would have felt horrible had Hillary Clinton won. (While I could live with Obama.) Would I have taken to the streets and tried to circumvent the democratic processes (had I been in the U.S.)? Hell no! When an election is over, it is over. (Barring election fraud and “hanging chad”-style issues.) Feel free to criticize poor decisions, make alternate suggestions to policy, attack abuse of power or Nixon-/Clintonesque behavior in office, whatnot—but respect the election result!

In Sweden and Germany (cf. the Charlottesville post), it is par for the course for any “right-wing” demonstration to be physically attacked by fanatical Leftists. Or consider the treatment of SD in Sweden. Or consider how, in Germany, immense efforts are taken to destroy the nationalist NPD, while a just as extreme and even more hare-brained descendant of the SED actually sits in parliament, and the far more extreme MLPD, openly calling for a communist revolution, is left in peace… Or take the methods of e.g. feminists* that I have written about so often, where dissenters are arbitrarily censored, unfairly maligned, shouted down, have their opinions grossly distorted, … In fact, at least in Germany, many Leftists seem to think that the way to change people’s minds is to make as much noise as possible—with no effort put into forming a coherent argument or presenting actual facts. To take to the streets with banners, drums, and empty catch phrases far away from the politicians seems to be the only thing some of them are able to do.

*The old family of claims that “If you want to anger a [member of a non-Leftist group], tell him a lie; if you want to anger a [member of a Leftist group], tell him the truth.” may be an exaggeration and over-generalization, but there remains a lot of truth to it. When applied to some sub-groups (notably feminists, the extreme Left, the likes of Antifa, …), it comes very close to being the literal truth. They walk through their lives with a pre-conceived opinion in their heads, blinders on their eyes, and simply cannot handle it when some piece of contrary information manages to sneak into their restricted field of view.

There is a massive, truly massive, problem with large parts of the Left and its attitude that “if we don’t like it, it must be destroyed by whatever means necessary”—no matter the law, civic rights, democratic values, …

This insanity must be stopped!

Please respect freedom of speech!

Please respect democratic processes!

Please understand how “presumed innocent” and “beyond reasonable doubt” work in a court!

Please look at the actual facts of a matter before exploding in rage!

Please save the riots for true abominations—and direct them solely at the authorities*!

Etc.

*A common thread of (even politically motivated) riots is that they hit innocent third parties worse than the presumed enemies of the rioters, having more in common with random vandalism and violence than with political protest.

Excursion on the acquittal: As I gather from online sources, e.g. [1], [2], the killer was a police officer at the end of a car chase of the deceased, who was a known criminal* on probation—and who had heroin in his car. The exact events at and around the end of the car chase are not entirely clear, but applying, as we must and should, “reasonable doubt”, it is clear that there was nowhere near enough evidence for a conviction on the raised “first-degree murder” charge—even had the police officer been guilty (which we do not know and, basing an opinion on news reports, we do not even have a strong reason to suspect). Under absolutely no circumstance can we arbitrarily apply different standards of proof to different types of crimes (including sex crimes!), to different types of suspects, or based on our personal involvement or pet issues. To boot, we must understand that while e.g. a jury can contain members who have preconceived opinions and personal sym- or antipathies, who fall for peer or press pressure, who are deeply stupid, whatnot, the jury members will usually know far more about the evidence situation than even knowledgeable observers—let alone random disgruntled citizens: If they see things differently than the disgruntled citizen, then the explanation will very often be that they know what they are talking about and that he does not. (As can be seen quite clearly with the Zimmerman trial. Cf. earlier link.)

*An interesting observation is that all or almost all similar cases I have seen have had a victim or “victim” (depending on the situation) who was not only Black, but also had a criminal history, albeit sometimes petty. This includes Anthony Lamar Smith, Michael Brown, Trayvon Martin, …—even Rodney King, who set his car chase in motion when he tried to hide a parole violation… (Which is not in any way to defend the excessive violence used and unnecessary cruelty shown by the police in that case.) This is important: These cases have not occurred because of random harassment or (at least not exclusively) “racial profiling”—these are current or former criminals, many of whom were actually engaging in criminal behavior during or immediately prior to the events.

There is a problem here, but it is certainly not the acquittal and almost certainly not the behavior of this specific police officer. Neither is there reason to believe that the killing was racially motivated. Neither is there reason to believe that an innocent man was killed (as might or might not have been the case with Trayvon Martin)—this was a criminal being killed while perpetrating crimes and trying to avoid arrest*. No, the problem is the general thinking within the U.S. justice system that guns are a reasonable early recourse and that it is better to shoot first than to be shot. (This could in turn be necessitated by the surrounding society or the attitudes of the criminals, but claiming anything beyond “motivated” would be conjecture on my part.) Possibly**, use of tranquilizer guns might be a viable option. Possibly**, a rule that guns may only be used against a criminal/suspect who has himself drawn a gun could work. Possibly**, a directive makes sense that an attempt must first be made to take someone out by other means*** before a likely lethal shot is allowed to be attempted. Either way: If that would have been a legitimate cause for a riot, it should have taken place after the actual shooting—in 2011.

*He might not have deserved to die for these crimes—even criminal lives matter. However, there is a world of difference between killing an innocent, or even a hardened criminal, just walking down the street, and killing someone who has just recklessly tried to outrun the police in a car chase. If in doubt, he would almost certainly not have been killed, had he surrendered peacefully in the first place. Notably, without the heroin (and possible other objects) that he criminally possessed, he would have had no obvious reason to run.

**I am too far away and lack relevant experience to make more than very tentative suggestions, and I make no guarantee that any of the mentioned examples would prove tenable.

***Depending on the situation, this could include a tasering, a tackle, a (mostly non-lethal) leg or gun-arm shot, …; possibly, in combination with waiting for an opportunity for a reasonable amount of time. (In this specific case, e.g. that the suspect leaves the car.)

Written by michaeleriksson

September 16, 2017 at 9:39 pm

The absurdities of my current project (and some attempts to draw lessons)


I have spent roughly two years in a row with my current client, to which can be added at least two stints in the past, working on several related projects. Until the beginning of the latest project, roughly six months ago*, I had enjoyed the cooperation, largely through the above-average amount of thinking needed, but also due to my low exposure to bureaucracy, company politics, and the like. With this project, things have changed drastically: problems with and within the project have turned the experience negative, to the point of repeatedly having negative effects on my private life**, and I definitely do not feel like accepting another extension. I am, in fact, at a point where I want to bang my head against the wall and scream my frustration on a regular basis.

*Giving an exact number is tricky, with the unofficial kick-off and preparations taking place long before the official kick-off, combining a long phase of (often failed or useless, cf. below) requirement clarifications and study of various documents with work on other projects.

**When problems of a “political” nature occur or when my work is hindered by the incompetence or lack of cooperation of others, I often have difficulties letting go of these issues in the evening. This is one of the reasons that I prefer working as a contractor—I am less invested than I was as an employee and can (usually…) take a more relaxed attitude towards the internal problems of a given client.

To look at a few problems:

(I apologize for the poor structuring below, but it is almost 1 A.M. in Germany and I want to close this topic.)

  1. The originally communicated live date was end of June. This was completely and utterly unrealistic, which I and several other developers communicated from the start. Eventually, the plan was changed to first quarter 2018—not only much more realistic, but something which allowed us to use a high-quality approach, even undo some of the past damage from earlier projects with a too optimistic schedule*. Unfortunately, some months in, with development already long underway based on this deadline, the schedule was revised considerably again**: Now the actual live date was end of October, with a “pilot” release (necessitating almost all of the work to be already completed) several weeks before that…

    *What is often referred to as “technical debt”, because you gain something today and lose it again tomorrow—with usurious and ruinous interest… Unfortunately, technical debt is something that non-developers, including executives and project managers, only rarely understand.

    **I will skip the exact reasoning behind this, especially since there are large elements of speculation in what has been communicated to me. However, it falls broadly in the category of external constraints—constraints, however, that were, or should have been, known quite a bit earlier.

    Cue panic and desperate measures, including scrapping many of the changes we had hoped for. Importantly, because a waterfall* model with a separate test phase was followed, we were forced to the observation that, from a “critical path” point of view, we were already screwed: The projected deadline could only be reached by reducing the test phase to something recklessly and irresponsibly short. As an emergency measure, in order to use test resources earlier and in parallel with development, the decision was made** to divide the existing and future work tasks into smaller groupings and releases than originally intended. On the upside, this potentially saved the schedule; on the downside, it increased the amount of overall work to be completed considerably. Of course, if we had known about the October deadline at the right time and if we had not believed the earlier claims, this could have been done in a much more orderly manner and with far less overhead: We almost certainly would have kept the deadline, had this been what was communicated to begin with. As is, it will take a miracle—indeed, even the official planning has since been adjusted again, to allow for several more weeks.

    *I consider waterfall projects a very bad idea, but I do not make the policy, and we have to do what we have to do within the given frame-work.

    **With my support, on a reluctant lesser-of-two-evils basis. Now, I am used to working with smaller “packages” of work and smaller releases in a SCRUM style from many of my Java projects. Generally, I consider this the better way; however, it is not something that can be easily done “after the fact”. To work well, it has to be done in a continual and controlled manner from the very beginning.

    Planner* lessons: Never shorten a dead-line. Be wary of rushing a schedule—even if things get done this time, future problems can ensue (including delays of future projects). Never shorten a dead-line. Do not set deadlines without having discussed the topic in detail with those who are to perform the work—the result will be unrealistic more often than not. Never shorten a dead-line.

    *For the sake of simplicity, I use the term planner as a catch-all that, depending on the circumstances, might refer to e.g. an executive, administrator, project manager, middle manager, … I deliberately avoid “manager”.

    Developer lessons: Do not trust dead-lines. Even in a waterfall project, try* to divide the work into smaller portions from the beginning. It will rarely hurt and it can often be a help.

    *This is not always possible or easy. In this case, there were at least two major problems: Firstly, poor modularization of the existing code with many (often mutual or circular) dependencies, knowledge sharing, lack of abstraction, … which implied that it was hard to make different feature changes without affecting many of the same files or even code sections. Secondly, the fact that the code pipeline was blocked by another project for half an eternity. (I apologize for the unclear formulation: Explaining the details would take two paragraphs.)

  2. During a meeting as early (I believe) as February, the main product manager (and then project manager) claimed that the requirements document would be completed in about a week—it still is not! In the interim, we have had massive problems getting him to actually clarify or address a great many points, often even getting him to understand that something needs to be clarified/addressed. This for reasons (many of which are not his fault) that include his lack of experience, repeated vacations and/or paternity leaves, uncooperative third parties and internal departments that need consulting, too many other tasks for him to handle in parallel, and, to be frank, his not being overly bright.

    A common problem was questions being asked and a promise given that the next version of the requirements document would contain the answer, or that something was discussed in a meeting and met with the same promise—only for the next version to not contain a word on the matter. When asked to remedy this, the answer usually was either a renewed (and often re-broken) promise or a request for a list of the open issues—something which should have been his responsibility. Of course, sending a list with open issues never resulted in more than a few of the issues actually being resolved—if any…

    As for the requirements document: I asked him early on to send me the updated version with “track changes” (MS Word being the prescribed tool) activated, either once a week or whenever there had been major changes. There was a stretch of several months (!) when a new version was promised again and again but never arrived. When it did arrive, there were no “track changes”…

    I recall with particular frustration a telephone conversation which went in an ever repeating circle of (with minor variations):

    Me: Please send me the changes that you have already made. I need something to work from.

    Him: Things could change when I get more feedback. It makes no sense to give you my current version.

    Me: I’ll take the risk.

    Him: You can work with the old version. Nothing will change anyway.

    Me: If nothing will change, why can’t you give me your current version?

    Him: Because things could change when I get more feedback. It makes no sense to give you my current version.

    (And, yes, I suspect that the problem was that he had simply left the document unchanged for half an eternity, despite knowing of a number of changes that had to be made, and did not want to admit this. He likely simply did not have an interim version to send, because the current state of the document was identical or near identical to the older version. Had he sent this document, he would have been revealed—hence the weird evasions and excuses that made no sense. But I stress that this is conjecture.)

    The other product managers are better; however, the only one assigned to the project full-time is a new employee who has started from scratch, with the others mostly helping out during vacations (and doing so with a shallower knowledge of the project and its details).

    Some blame falls back on development, specifically me, for being too trusting, especially through not keeping a separate list of the open points and asked questions—but this is simply not normally needed from a developer. This is typically done by either product or project management—and he was both at the time. See also the discussion below of what happened when we did keep such a list…

    Planner lessons: Make sure that key roles in an important and complex project are filled with competent and experienced people and that they have enough time to dedicate to the project. (Both with regard to e.g. vacations and hours per typical week.) Do not mix roles like product and project manager—if you do, who guards the guards?

    Developer lessons: Do not trust other people to do their jobs. Escalate in a timely manner. Find out where you need to compensate in time.

    Generic/product manager lesson: If you are over-taxed with your assignments, try to get help. If you cannot get help, be honest about the situation. Do not stick your head in the sand and hope that everything will pan out—it probably will not.

  3. After the first few months, a specialist project manager was added to the team. As far as I was concerned*, he had very little early impact. In particular, he did not take the whip to the main product manager the way I had hoped.

    *Which is not automatically to say that he did nothing. This is a large project with many parties involved, and he could conceivably have contributed in areas that I was rarely exposed to, e.g. contacts with third parties.

    Around the time the October deadline was introduced, a second project leader was added and upper management appeared to develop a strong interest in the project’s progress. Among the effects was the creation of various Excel sheets, Jira tasks, overviews, estimates, … (Many of which, in as far as they are beneficial, should have been created a lot earlier.) At this point, the main task of project management appeared to be related to project reporting and (unproductive types of) project controlling, while other tasks (notably aiming at making the project run more efficiently and effectively) were not prioritized. For instance, we now have the instruction to book time on various Jira tasks, which is a major hassle due to their sheer number*, how unclearly formulated they are, and how poorly they match the actual work done. To boot, a few weeks ago, we were given the directive that the tasks for booking time on the “technical specification” had been closed and that we should no longer use them—weird, seeing that the technical specification is still not even close to being done: Either this means that there will be no technical specification or that the corresponding efforts will be misbooked on tasks that are unrelated (or not booked at all).

    *I counted the tasks assigned to me in Jira last week. To my recollection, I had about ten open tasks relating to actual actions within the project—and fourteen (14!) intended solely to book my time on. Do some work—and then decide which of the fourteen different, poorly described booking tasks should be used… It would have been much, much easier e.g. to just book time on the actual work tasks—but then it would have been harder for project management to make a progress report. In contrast, we originally had three tasks each, reflecting, respectively, meetings, technical specification, and development. Sure, there were border-line cases, but by and large, I could just make a rough approximation at the end of the day that I had spent X, Y, and Z on these three, clearly separated tasks. To that, I had no objections.

    We also had a weekly project/team meeting scheduled for one hour to report on progress. After we exceeded the time allotted, what was the solution for next week? (Seeing that we are on a tight schedule…) Keep a higher tempo? No. Replace the one big meeting with a daily ten minute mini-briefing? No. Skip the meeting and just have the project manager spend five to ten minutes with each individual developer? No. Increase the allotted time by another half hour? Yes!

    By now we have a third project manager: That makes three of them to two product managers, two testers, and seven developers. Call me crazy, but I find these proportions less than optimal…

    At some stage, the decision was made to keep a list of open questions in Confluence to ensure that these actually were resolved. Of course, the responsibility to fill these pages landed on the developers, even when emails with open items had already repeatedly been sent to product management. With barely any progress with the clarifications noticeable in Confluence, this procedure (originally agreed between development and product management) is suddenly unilaterally revoked by project management: Confluence is not to be used anymore, what is present in Confluence “does not exist”, and all open issues should be moved, again by development, to a new tool called TeamWork. Should the developers spend their time developing or performing arbitrary and/or unnecessary administrative tasks? We are on a deadline here…

    Well, TeamWork is an absolute disaster. It is a web-based, cloud-style tool with a poorly thought-through user interface. It does nothing in the areas I have contact with that could not be done just as well per Jira, Confluence, and email. Open it in just one browser tab, and my processor moves above 50% usage—where it normally just ticks around at no more than a few percent*. To boot, it appears** that the company’s data are now resting on a foreign server, on the Internet, with a considerable reduction in data security, and if the Internet is not there, or if the cloud service is interrupted or bankrupted, whatnot, the data could immediately become inaccessible. And, oh yes, TeamWork does not work without JavaScript, which implies that I have to lower my security settings accordingly, opening holes where my work computer could be infected by malware and, in a worst case scenario, the integrity of my current client’s entire Intranet be threatened.

    *My private Linux computer, at the time of writing, does not even breach 1 percent…

    **Going by the URL used for access.

    The whole TeamWork situation has also been severely mishandled: Originally, it was claimed that TeamWork was to be used to solve a problem with communications with several third parties. Whether it actually brings any benefit here is something I am not certain of, but it is certainly not more than what could have been achieved with a 1990s-style mailing-list solution (or an appropriately configured Jira)—and it forces said third parties to take additional steps. But, OK, let us say that we do this. I can live with the third-party communication, seeing that I am rarely involved and that I do not have to log in to TeamWork to answer a message sent per TeamWork—such messages are cc-ed per email and answers to these emails are distributed correctly, including into TeamWork itself. However, now we are suddenly supposed to use TeamWork for ever more tasks, including those for which we already have better tools (notably Jira and Confluence). Even internal communication per email is frowned upon, with the schizophrenic situation that if I want to ask a product manager something regarding this project, I must use TeamWork; for any other project, I would use email… What is next: No phone calls? I also note that these instructions come solely from project management. I have yet to see any such claim from e.g. the head of the IT department or the head of the Software Development sub-department—or, for that matter, of the Product Management department.

    To boot: The main project manager, who appears to be the driving force behind this very unfortunate choice of tool, has written several emails* directed at me, cc-ing basically the entire team, where he e.g. paints a picture of me as a sole recalcitrant who refuses to use the tool (untruthful: I do use it, and I am certainly not the only one less than impressed) and claims that it should be obvious that Confluence was out of the picture, because Confluence was intended to replace the requirements document (untruthful, cf. below), so that product management would be saved the effort of updating it—but now it was decided that the requirements document should be updated again (untruthful, cf. below); ergo, Confluence is not needed. In reality, the main intention behind Confluence (resp. this specific use, one of many) was to track the open issues (cf. the problems discussed above). Not updating the requirements document was a secondary suggestion on the basis that “since we already have the answers in Confluence, we won’t need to have the requirements document too”. Not even this was actually agreed upon, however, and I always took the line that everything that was a requirement belonged in the requirements document. To the best of my knowledge, the head of Software Development has consistently expressed the same opinion. There can be no “again” where something has never stopped.

    *Since I cannot access my email account at my client’s from home, I have to be a little vaguer here than I would like.

    In as far as I have used TeamWork less than I could have (which I doubt), the problem rests largely with him: Instead of sending us a link to a manual (or a manual directly), what does he do? He sends a link to instructional videos, as if we were children and not software developers. There is a load of things to do on this project—wasting time on videos is not one of them. To boot, watching a video with sound, which I strongly suspect would be needed, requires head-phones. Most of the computers on the premises do not have head-phones per default, resulting in an additional effort.

    Drawing lessons in detail here is too complex a task, but I point to the need to hire people who actually know what they are doing; to remember that project management has several aspects, project reporting being just one; and to remember what is truly important, namely getting done on time with a sufficient degree of quality.

    While not a lesson, per se: Cloud-style tools are only very, very rarely acceptable in an enterprise or professional setting—unless they run on the using company’s own servers. (And they are often a bad idea for consumers too: In very many cases, the ones drawing all or almost all benefits compared to ordinary software solutions are the providers of the cloud services—not the users.) If I were the CEO of my client, I would plainly and simply ban the use of TeamWork for any company-internal purposes.

  4. Last year, four people were hired at roughly the same time to stock up the IT department, two of whom (a developer and a tester) would have been important contributors to this project, while the other two (application support) would have helped in reducing the work-load on the developers in terms of tickets, thereby freeing resources for the project. They are all (!) out again. One of these was fired for what appears (beware that I go by the rumor mill) to be political reasons. Two others have resigned for reasons of dissatisfaction. The fourth, the developer, has disappeared under weird circumstances. (See excursion below.) This has strained the resources considerably, made planning that much harder, and left that much more work for the rest of us to complete in the allotted time.

    Planner lesson: Try* to avoid hiring in groups that are large compared to the existing staff**. It is better to hire more continuously, because new employees are disproportionally likely to disappear, and a continuous hiring policy makes it easier to compensate. Keep an eye on new (and old…) employees, to make sure that lack of satisfaction is discovered and appropriate actions taken, be it making them satisfied or starting the search for replacements early. Watch out for troublesome employees and find replacements in time. (Cf. the excursion at the end; I repeatedly mentioned misgivings to his team lead even before the disappearance, but they were, with hindsight, not taken sufficiently seriously.)

    *Due to external constraints, notably budget allocations, this is not always possible (then again, those who plan those aspects should be equally careful). It must also be remembered that the current market is extremely tough for employers in the IT sector—beggars can’t be choosers.

    **Above, we saw (in a rough guesstimate) a 30% increase of the overall IT department, as well as a 100% increase of the application support team and a 50% increase of the QA team.

  5. At the very point where finally everyone intended for the project was allotted full-time and the looming deadline dictated that everyone should be working like a maniac, what happens? The German vacation period begins and everyone and his uncle disappears for two to four weeks… There was about a week where I was the single (!) developer present (and I was too busy solving production problems to do anything for the project…). Product management, with many issues yet to be clarified, was just as thin. There was at least one day when not one single product manager was present…

    Lessons: Not much, because there is precious little to be done in Germany without causing a riot. (And I assume that the planning that was done did already consider vacations, being unrealistic for other reasons.)

Excursion on the missing developer: His case is possibly the most absurd I have ever encountered and worthy of some discussion, especially since it affected the project negatively not just through his disappearance but through the inability to plan properly—not to mention being a contributor to my personal frustration. He was hired to start in September (?) last year. Through the first months, he gave a very professional impression, did his allotted task satisfactorily (especially considering that he was often dropped into the deep end of the pool with little preparation), and seemed to be a genuine keeper. The one problem was that he had not actually done any “real” development (coding and such), instead being focused on solving tickets and helping the test department—and that was not his fault, just a result of how events played out with the on-going projects.

However, at some point in the new year, things grew weirder and weirder. First he started to come late or leave early, or not show up at all, for a variety of reasons including (claimed or real) traffic disturbances and an ailing girl-friend, who had to be driven here-and-there. Taken each on their own, this was not necessarily remarkable, but the accumulation was disturbing. To boot, he often made claims along the lines “it’s just another week, then everything is back to normal”—but when next week came he had some other reason. He also did not take measures to minimize the damage to his employer, e.g. through checking into a hotel* every now-and-then or through cutting down on his lunch breaks**. He especially, barring odd exceptions, did not stay late on the days he came late or come early on the days he left early. In fact, I quite often came earlier and left later than he did—I do not take a lunch break at all… A repeated occurrence was to make a promise for a specific day, and then not keep it. I recall especially a Thursday when we had a production release. He explicitly told me that he would be at work early the following day to check the results, meaning that I did not have to. I slept in, eventually arrived at work, and found that he was still not there. Indeed, he did not show the entire day… I do not recall whether there were any actual problems that morning, but if there were, there was no-one else around with the knowledge to resolve them until my arrival (due to vacations). One day, when he did not have any pressing reason to leave early, he upped and left early. Why? He wanted to go ride his horse…

*Hotels can be expensive, but he had a good job and it is his responsibility to get to and from work in a reasonable manner. Problems in this area are his problems, not his employer’s.

**In a rough guesstimate, he took an average of 45 minutes a day for lunch, even on days when he came late or knew that he would leave early. In his shoes, I would simply have brought a sandwich or two, and cut the lunch break down to a fraction on these days. He did not. There was even one day when he came in at roughly 11:30, went to lunch half-an-hour later, and was gone almost an hour… Practically speaking, he started the workday shortly before 13:00… He may or may not have stayed late, but he did not stay late enough to reach an eight-hour day—security would have thrown him out too early in the evening…

The claimed problems with his girl-friend grew larger and larger. At some point, he requested a three-day vacation to drive her down to Switzerland for a prolonged treatment or convalescence. He promised that once she was there, there would be a large block of time without any further disturbance. The request was granted—and that was the last we ever saw of him. No, actually to my very, very great surprise, after a long series of sick notes and postponements, he actually showed up for three days, only to disappear again permanently after that, having announced his resignation and a wish to move to Switzerland. During these three days, I briefly talked to him about his situation (keeping fake-friendly in the naive belief that he actually intended to show up for the remainder of his employment) and he claimed that he actually had intended to show up two workdays earlier, feeling healthy, and, in fact, being fully clothed and standing in the door-way, when his girl-friend pointed out that the physician had nominally given him another two days. He then stayed at home, because, by his own claim, he felt that he could use that time better… In this manner, a three-day vacation somehow turned into three workdays—spread over a number of otherwise workless months.

Written by michaeleriksson

September 13, 2017 at 11:58 pm

Taking my grandfather’s axe to the politically correct


Among the many things that annoy me about the politically correct is their wide-spread inability to differentiate between word and concept/intent/meaning/…. At the same time, I have long been annoyed by the pseudo-paradox of the “grandfather’s axe”. Below I will discuss some related, partially overlapping, points.

One of the most popular “pop philosophy” questions/riddles/paradoxes is (with variations):

This axe belonged to my grandfather. The head has been replaced occasionally and so has the handle. Is it still the same axe?

At the same time, we have the old saying “you cannot cross the same river twice”. How then can it be that I have crossed the Rhine hundreds of times? (Not to mention many crossings of other individual rivers, including the Main and the Isar.)

In the first case, we have a question that simply cannot be resolved by logic, because it is ambiguously formulated; in the second, the apparent contradiction arises out of a very similar ambiguity:

The meaning of “same”* is not defined with sufficient precision, because “same” can be used to refer to several (possibly many) different concepts. When we say “same axe” or “same river”, do we mean e.g. that it is the same basic entity, although changed over time, having some constant aspect of identity; or that it is identically the same without any change? Something in between? Looking at the axe example, it might actually be too unrefined to bring the point across, because it only has the two parts (with some reservations for the type of axe) and it might not be obvious that more than one interpretation is reasonable. Consider instead the same example using a Model T Ford: Someone has an old Model T standing in a barn. His great-grand-parents bought it, and several generations have made sure that it has been kept running over the years, be it through sentimentality, interest in old cars, or hope for a future value increase. By now, every single part** of it has at some point been exchanged. Is it still the same car? If not, when did it cease to be the original car? Similarly, are these still the same hands I am typing with that I used seven years ago***? Fourteen years ago? That I was born with more than 42 years ago?

*Alternatively, the ambiguity could be seen to lie in “axe” and “river”, or a disagreement about what part of an entity carries the identity. In the case of river crossings this might even be the more productive point of attack.

**Assume, for the sake of argument, that this happened a single part at a time and that any part that might have been taken to carry the identity of the car was not changed as a whole in one go—if need be through intervention by a welder.

***Assuming that the common claim holds true that all cells are replaced within a space of seven years. (This is an over-simplification, but could conceivably be true specifically for a hand.)

As is obvious, understanding what someone means by a certain word means understanding which concept is intended. Conversely, it is in our own best interest to avoid such ambiguities to the best of our abilities*, to be very careful before applying a word in a manner that implies a different concept than is normally intended, and to prefer the use of new words when a new differentiation is needed.

*Doing so perfectly is a virtual impossibility.

To exemplify the last point: In today’s world, words like “man”, “woman”, “male”, and “female”, that used to have a clear meaning/make a differentiation in one dimension, can be used for at least two meanings/make a differentiation in one of two dimensions. It is no longer necessarily a matter of whether someone is physically, biologically a man or a woman—but often whether someone self-identifies as a man or a woman.* Now, this in itself is merely unfortunate and a cause of confusion—the second differentiation should have been done by adding new words. The real problems arise because some groups of politically correct insist** that the new*** meaning is the proper meaning; that other uses would be “sexist”, “discriminatory”, or similar; or, with a wider net, that the concept behind the new meaning is what should dictate discussion.

*For the sake of simplicity, I leave e.g. post-op transsexuals and unusual chromosome combinations out of the discussion.

**See the excursion on “Tolkningsföreträde” below.

***Whether “new” is a good phrasing can be discussed. A possible interpretation of events is that these potential concepts happened to coincide sufficiently in the past that there never was a need to differentiate. If so, which concept or word should be considered new and which old? There might well exist situations where this question cannot be fairly answered or the outcome is a tie. In this specific case, it seems highly plausible to me that e.g. a nineteenth century human would have taken the biological as the sole or very strongly predominant meaning; and the use of “new” seems reasonable. (However, the point is mostly interesting because it gives another example of how tricky word and meaning can be. No reversal of which meaning is old and which is new will change the problems discussed in the main text—on the outside a marginal shift of blame can take place.)

In many cases, words are redefined in such a grotesque manner that I am unable to assume good faith, instead tending towards intellectual dishonesty as the explanation. A prime example is a sometime use of “racism”* to include a strong aspect of having power—with the ensuing (highly disputable) conclusion that black people, as a matter of definition, cannot be racist… At extremes, this can be taken to the point that all white people are, !as a matter of definition!, racist. Similarly, some feminists redefine “rape” in a ridiculous manner, notably to arrive at exaggerated numbers for rape statistics in order to justify their world-view. At the farthest extreme, although thankfully very rarely, I have even seen the claim that if a woman consents today and changes her mind tomorrow (!!!), then she has been raped…

*Generally a very abused word, including e.g. being used as a blanket replacement for “racial” or as a blanket attack against anyone who even contemplates the possibility of racial differences.

Quite often lesser distortions take place (often driven by a general tendency in the overall population), including the artificial limitation of “discrimination” to mean e.g. unlawful or sexist/racist discrimination: Discrimination is generally something good and positive—there are only rare specific types of discrimination that are problematic. Hire someone with a doctorate over a high-school dropout and you have just discriminated—but in the vast majority of circumstances, no reasonable third-party will take offense.

Yet other cases go back to simply not understanding what a word means or to having been so overwhelmed by figurative use that objectively perfectly fine uses are unfairly condemned. There is nothing wrong, e.g., in calling a tribe that still lives in a stone-age society primitive—it is primitive. (In contrast, calling someone with a different set of ideas primitive is either an abuse of language or an insult, depending on preference.) This phenomenon is reflected in the concept of “euphemistic treadmills”, where one word is replaced by a second to avoid demeaning connotations (e.g. through school-yard use), then by a third/fourth/fifth/… when the respective previous word also develops demeaning connotations (or is perceived to have done so). The problem is, of course, not the word itself, or the connotations of the word, but the connotations of the concept—changing the word is a temporary band-aid, and in the end it does more harm than good. To cruel children, e.g., it will not matter whether that other kid is formally classified as a spastic, as a retard, or as being “differently abled”—he still remains a freak to them. (Or not, irrespective of the word used.)

The last example brings us to the related issue of word and intent: There is, for instance, nothing inherently racist or otherwise “evil” in the word “nigger”. The word might very well signal a racist intent (especially with its current stigma), but, if so, it is that intent that is problematic—not the word itself. That “nigger” is not an evil word is, if in doubt, proved by its common use by black people without any negative intent, possibly even with a positive intent. Other uses can have yet other intents and implications, including (like here) the purposeful discussion of the word itself. Still, it is quite common that politically correct extremists want to censor, and sometimes succeed in censoring, the word itself in works written when its use was accepted or where its use reflects what a character would realistically have said—not just a negative intent, or even an “outdated stereotype”*, but the word itself. This to the point that similar attempts have been directed at the cognate Swedish word “neger”, which never had any of the implications or the stigma that “nigger” had, nor its historical background**—until some point in (possibly) the eighties when it suddenly grew more and more “offensive”. (No doubt under the direct influence of the, strictly speaking irrelevant, U.S. situation.) Similarly, “bitch”*** is not inherently sexist: There is nothing harmful in my referring to my dearest childhood friend, Liza, as a bitch—it is an objectively true and value-free statement****.

*I strongly disagree with any later interventions into literature, even children’s literature like “Tom Sawyer” or the various “Dr. Dolittle” books, considering them a fraud on the readers and a crime against the original author. However, that is a different topic, and censoring based merely on words is more obviously wrong, with less room for personal opinion.

**In my strong impression, even “nigger” only took on its negative connotations over time: There does not seem to have been an original thought-process of “I hate those damn blackies and I want to demean them; ergo, I’ll call them ‘niggers’.” Instead, as in an earlier paragraph, the word was in use, just like “lamp”, by people with a certain attitude, speaking with a certain intent, and that intent over time came to dominate the connotations. However, there was at least a somewhat rational and understandable process in the U.S.—in Sweden, it was just an arbitrary decision by some group of political propagandists.

***To boot, “bitch” (and many other words) does not necessarily fall into such categories, because such words do not necessarily make statements about e.g. women in general. Often, they are simply sex (or whatnot) specific insults used to refer to an individual woman. Similarly, “son of a bitch” is usually simply a sex-specific insult for men. A rare instance when “bitch” could be seen as sexist is when referring to a man as a bitch (“Stop crying, you little bitch!”), because this could be seen to express that his behavior is simultaneously negative and feminine (“only weak women cry—are you a woman?”).

****She was, after all, the family dog…

Excursion on “Tolkningsföreträde”: A very common problem in Sweden is the incessant assumption by groups of the politically correct, feminists, …, that they have tolkningsföreträde—originally a legal term, assigning a certain entity the right of interpretation of e.g. a contract in case of disagreement. (I am not aware of a similar term in e.g. the U.S. legal system, but it might well exist. A similar metaphorical application does not appear to be present, however, even if the same attitude often is.) It is their way or the highway: They decide what a word should mean. They decide what is sexism. They decide what is acceptable. Etc. Have the audacity to question this right, even by pointing to the possibility of another interpretation or by pointing out that their use does not match the established one, and what happens? You (!) are accused of demanding tolkningsföreträde… (And, yes, they appear to be entirely and utterly unaware of the hypocrisy in this—or possibly they use the claim as a deliberately intellectually dishonest means of undermining opponents: I sometimes find it hard to resist the thought of there being some form of commonly taken course or set of guidelines for the politically correct on how to sabotage one’s opponents…)

Written by michaeleriksson

September 10, 2017 at 9:09 pm

A few thoughts on franchises and sequels

leave a comment »

Being home with a cold, I just killed some time finally watching “Dead Men Tell No Tales”, the fifth and latest installment in the “Pirates of the Caribbean” movie series.

I am left pondering the dilemma of when a franchise should call it quits (from a non-monetary viewpoint; we all know Disney):

On the one hand, this is by no means a bad movie—without the comparison with some of its predecessors, I might have given a very favorable review within its genre. On the other, it is still a pale imitation of what the first three* movies (especially the first) were. It attempts the same type of banter and humor, but does so with less skill. It has a very similar set of characters, some shared, some molded after old characters, but both the character development and the casting are considerably weaker**. The plot is just another variation of the predecessors’, without any real invention. The music is driven by the same main theme, but otherwise the score and overall music are not in any way noteworthy. (And the main theme is almost annoyingly repetitive when used for so many movies—a key feature of good music is variation.) Etc.

*I have only seen the fourth on one occasion, and my recollection is too vague for a comparison. However, while the first three were basically a trilogy with more-or-less the same cast and characters and a semi-continuous overall story, the fourth and fifth movies were stand-alone efforts with only a partial sharing of characters and (likely) considerably fewer resources.

**The first movie was absolutely amazing in this regard: Most movies would consider themselves lucky to have even one of Jack Sparrow (sorry, CAPTAIN Jack Sparrow), Will Turner, Elizabeth Swann, Captain Barbossa, or even Commodore Norrington; and the quality continued into the smaller parts. The second and third followed suit. In the fifth, Will and Elizabeth have cameo appearances and the vacuum is filled by imitation characters that compare to the originals as glass does to diamond. Sparrow has gone from dashing, cunning, and comedic to just comedic, while the old comedic-relief characters are pushed to the margin. Norrington is long gone and while there is a British commander of some form, he is entirely unremarkable and has very little screen time. The new villain is just a lesser re-hash of (the undead version of) Barbossa and Davy Jones. Barbossa himself remains strong and has an unexpected character twist, but he worked better as the original villain than he ever did as a non-villain.

In the end, I consider the two-or-so hours well spent, but chances are that I will watch the first movie again* before I watch the fifth (or fourth) a second time. To boot, the follow-up movies to some degree detract from the earlier movies, and from an artistic point of view, the series would have been better off just ending after the third movie. (Some argue “after the first”, and the second and third do not reach the level of the first; however, there is much less of a distance, more innovation, and less repetitiveness compared to the later movies.)

*I have not kept count of my watchings, but over the years it is definitely more than half a dozen for the first, with numbers two and three clocking in at two or three fewer.

Consider the “Matrix” and assume that the sequels had never been made: There might or might not have been disappointment over the lack of sequels and/or wonderment about what the sequels would have been like—but we would not have had the general disappointment over said sequels. (While actually reasonably entertaining, they were nowhere near the first movie and they introduce knowledge that actually lessens the first movie.) Would it not be better to have the feeling of having missed out on something than not having missed out and being disappointed? Sequels should be made when they can really bring something to the table*—not just because people want more**. The whole “Rocky” franchise contains one noteworthy movie—the first. The rest might have entertained portions of the masses*** and made some people a lot of money, but where was the actual value compared to just making similar movies starting from scratch? “Highlander”**** was utterly ruined by the sequels, which turned it from an original fantasy movie with something major at stake into part of a ridiculous “aliens are among us” C-movie franchise.

*Of course, this is not always something that can be known in advance, especially when matters of taste come into it. Often, however, the case is crystal clear.

**The actual decision will unfortunately be made based on the studio (or some similar entity) wanting more.

***Including a younger version of me: At least the whole Rocky/Ivan Drago thing was a thrill the first time around. A later watching as an adult left me unmoved.

****It is quite conceivable that my interest would have dropped through my own development, as with Ivan Drago; however, even that aside, the sequels utterly ruined the original.

When I was a teenager, one of my absolute favorite TV series was “Twin Peaks”. This series was artificially cut short on a cliff-hanger at the end of the second season—and for several years I (and many others—“Twin Peaks” was a big deal at the time) hoped that someone in charge would change his mind and that we would see a third season after all. Time went by, and the possibility became unrealistic in light of actors aging or even dying. Now, not years but decades later, there is a third season … of sorts. Based on the first three episodes*, it is a disappointment. Firstly, and almost necessarily, it does not pick up where season two ended, but roughly as far in the future as time has passed in real life, and most of the story-lines, events, what-ifs, …, have already played out during the gap. Secondly, many of the things that made the original great (at least in my teenage mind) are missing, including (apart from cameos) most of the old characters—and the old characters that remain are, well, old. Possibly, it will turn out to be great in the end, but I doubt it. Even if it does turn out great, it will not be what I once wished for. Does the sequel make sense? Probably not.

*The season itself has progressed farther, but I have only watched three episodes so far. I will pick it up again later, and reserve my final judgment until I am either through or have given up.

In contrast, the “Evil Dead” movie franchise, of which I had just a mildly positive impression, has come up with a TV series continuation, again set several decades after the previous installment. It is hilariously entertaining: Funny, good violence, good music, likable characters. OK, the “deeper” and “artistic” values are virtually absent (just as they were in the movies), but for just kicking back for half an hour it is a gem—and it is far ahead of the movies. Sometimes, an unexpected continuation is a good thing… Similarly, it is not unheard of for a weak sequel to be followed by a stronger sequel (e.g. the inexcusable “Psycho III” and the reasonably good “Psycho IV”; but, true, no sequel at all would have been the best for “Psycho”) or even, on rare occasions, for the sequels to better the original (“Tremors”; although the starting point was not that impressive).

Going into a full discussion of all sequels and franchises that could be relevant would take forever (“Star Trek”, “James Bond”, “Doctor Who”, various horror franchises, various super-hero franchises, …). I point, however, to my review of “Star Wars VII” for some discussion of “Star Wars” and the topic of sequels. I further note, concerning one of the very few “serious” examples, that “The Godfather III” was another case of an actually reasonably good movie that was simply not up to par with its predecessors (and, yes, Sofia Coppola was one of the worst casting choices in history).

As an aside, reboots and remakes are almost always a bad idea, while the move from one medium to another often fails badly and, even when not, only rarely manages to reach the quality, popularity, whatnot, found in the original medium.

Written by michaeleriksson

September 7, 2017 at 4:02 pm

Death of body builder Rich Piana / Follow-up: Reality disconnect

leave a comment »

Had I known one day ago what happened two days ago, I might have been far more specific:

This Friday, a body builder named Rich Piana died, after* suffering a heart attack, hitting his head in the subsequent fall, and spending several weeks in an induced coma.

*I have read several somewhat conflicting accounts today, including those speculating on opiate use and, obviously, the mandatory “steroid overdose”, but the claims above seem to be reasonably mainstream, and I will stick to this scenario for now. Beware, however, that this need not be the exact truth of what happened.

Not only was he one of the people I had in mind when I wrote about the extremes some go to (I have watched possibly two dozen of his videos), but I am also reasonably certain that he was the one with the insulin-injecting friend*—and his death is a perfect, if very sad, illustration of some of the problems involved when assigning blame:

*Sometimes the weirdest coincidences occur. I recall e.g. watching “Black Swan” the first time, being blown away, reading up a bit afterwards, and seeing a claim about Oscar-winner Natalie Portman. When the hell did she win an Oscar?!? Mere hours earlier—for her part in … “Black Swan”.

  1. If drugs were involved in his death, they were so in an indirect manner. They might have caused or contributed to the heart attack, but the cause of death was likely brain related. (And if so, likely because of the blow to the head, possibly in combination with a deliberate decision to “turn off the machines”; remember that the modern criterion for death is the brain, not the heart.)
  2. Among the drugs most likely to have been the cause, we do not have steroids—but various forms of growth hormone. I definitely recall one video discussing how his hands, feet, gut, likely even head, had grown due to growth hormones—and that even he more-or-less took it for granted that his heart was affected too. Insofar as steroids were involved, well, he apparently started taking them as a teenager and kept it up for several decades…
  3. The heart attack could have been caused by his eating habits, which included a daily pint of Ben and Jerry’s, tons of fast food, and up to twelve meals a day during some phases—eat like that and a heart attack at 46 is no surprise. At the same time, IIRC, he also used a “ketogenic diet”, which effectively amounts to starving the body of carbohydrates, thereby causing its energy metabolism to change. I am not aware of any known health problems associated with this, but there is a decided possibility that such extremes have side-effects.
  4. He was a positively enormous, almost grotesquely large, man. Where some body builders have upper arms like other people have thighs, he had upper arms like other body builders (!) have thighs. Just carrying that amount of weight must have been an enormous stress on his heart (and knees, and whatnots).
  5. He was quite extreme in a number of other regards too; some, including endless hours spent in the gym, that could possibly have had some relevance; others, including tattoos, that almost certainly did not.

Those interested can find his YouTube account under https://www.youtube.com/user/1DAYUMAY/. Please beware that the possible first impression of “complete moron” is very far from the truth—on closer inspection, he was a fair bit above the average in terms of intelligence.

To boot, it seems that another body builder, Dallas McCarver, died earlier in the week, with speculation that an insulin overdose was the cause. (To reiterate: Insulin is indisputably very dangerous. Even for diabetics, it is merely a lesser evil.)

Written by michaeleriksson

August 27, 2017 at 2:38 pm

Reality disconnect

with one comment

I have often, including in some of my latest posts, written about a “reality disconnect”* among e.g. politicians, journalists, feminist propagandists, … where the things that they loudly claim** in public simply do not match reality. And, no, I am not saying that they simply see the world differently than I do (if I did, I might be the problem!): There are many points where mainstream science says something very different; where actual statistics are incompatible with the claims; where the statistics might seem superficially compatible, but logically must be interpreted differently than they do***; etc. Not to mention the many cases where a certain set of data allows a handful of conclusions and they just jump to and stick with the one single conclusion that matches their world view, without even considering the possibility that one of the other conclusions could be true.

*I am not certain whether I have ever used this particular phrasing, however.

**What is genuine opinion and what is an attempt to manipulate the public is often hard or impossible to tell. In the case of high-level politicians, I would tend towards manipulation attempts; in the case of journalists, feminists, and lower-level party sympathizers (including many bloggers), genuine opinion could be more likely.

***Cf. e.g. the “77 cents on the dollar” bullshit.

To date, I have been focused on issues relating to e.g. political correctness; however, there are many, many other instances where similar reality disconnects exist.

Take e.g. the issue of doping (in general) and anabolic steroids (in particular)*: The view painted in the media and in “public information” is invariably that this is a great evil, with numerous unavoidable and debilitating side-effects. The high use among e.g. gym goers is viewed as a major issue. If we look at actual experiences and data, a much more nuanced picture arises, up to the point that the overall effect on someone’s life can be positive.

*Disclaimers: a) The intent is not to paint doping in a positive light, nor even to paint it in a more nuanced light (although I would see it as positive if some of the readers develop a more nuanced view). The purpose is rather to demonstrate the problems of reality disconnect, intellectual dishonesty, lack of critical thinking, etc. The apparent topic matter is just a very suitable example, especially since I would rather not write yet another piece on e.g. feminism. b) The only drugs I take myself are coffee (large quantities), alcohol (small quantities), and the odd aspirin/tylenol/whatnot. (However, I did originally look into the topic with an eye on a possible future use, to compensate for the effects of aging that will eventually manifest. I leave this option open for now.) c) No-one should ever take these types of drugs before knowing what he is doing. (Cf. e.g. item 1 below.)

Consider some common problems with reporting:

  1. Severe problems, let alone disastrous ones, usually go back to people taking drugs without doing the appropriate research (either not researching at all or going by what some guy in the gym said) or people simply being stupid.

    For instance, I once saw a YouTube video speaking of a body-builder friend who, as a first-time user, had taken a large shot of insulin* on an empty stomach and not eaten anything afterwards. He started to feel weak and, instead of now urgently eating something, went to bed to rest. He fell unconscious, and hours of seizures and life in a wheelchair followed. Notwithstanding that insulin is a drug that is generally considered dangerous, being a “lesser evil” even for actual diabetics, this shows a great degree of ignorance and stupidity: Even five minutes on the Internet would have taught him that it was vital to compensate with carbohydrates; indeed, an at least vague awareness of “insulin shocks” and similar in diabetics should be present in anyone who has even graduated junior high school, and from that at least the potential for danger would follow immediately. To boot, chances are that a low blood-sugar level would have diminished the results he was hoping for, because one of the main ideas would be to increase the muscles’ uptake of glycogen, thereby making them larger**—but with low blood sugar…

    *Insulin is used by many (non-diabetic) body builders for the purpose of muscle growth.

    **Whether this actually works, I do not know—the line between science and “bro science” can be hard to detect on the Internet. It is notable, however, that body builders often go for size over strength. Glycogen can contribute to overall muscle size, but the actual “weight pulling” parts of the muscle remain unchanged.

    A common issue is failing to “cycle” (effectively, taking a break from drug use): This is basically the first thing to pick up when even considering drug use—yet many fail to do so and see a health detriment with no offsetting benefit. Cycling has the dual benefit of a) giving the body time off to function normally and to at least partially restore itself from side-effects, and b) diminishing the “tolerance” towards the drug, so that a smaller dose is needed once the break is over: As with e.g. alcohol, the more the body is used to it, the more is needed to get the effect one is looking for—and the greater the damage to those parts of the body that cannot adapt or are slower to adapt. Take a break and the effectiveness of a smaller dose increases again.

  2. Many reported cases go back to misrepresentations of the actual events.

    A particularly notable case is Arnold Schwarzenegger’s heart surgery, which has been blamed on steroids. In reality, there is no proof of a connection whatsoever. Moreover, his version is that it was a congenital problem… (Schwarzenegger could, obviously, be lying, but there is no obvious reason for him to do so: He has already publicly admitted to drug use, and what he did was, at the time, perfectly legal.)

    Another is Gregg Valentino and his “exploding arms”: This issue, including the invasive surgery needed, did not stem directly from use of any type of enhancer—it stemmed from being sloppy with injections, especially re-using dirty needles. This sloppiness led to a severe infection, the situation was made worse through amateurish attempts at self-surgery, and the professionals were forced to take drastic measures. With proper handling of injections (possibly even with a sufficiently early visit to a physician) this would not have happened; with such improper handling, even medically legitimate injections (e.g. to treat diabetes) would have led to similar problems with equal probability. (With some reservations for where on the body, and for what purpose, the injections take place.) To boot, one documentary that I saw claimed that “steroids” ruined his arms—which is not at all the case. What he injected was synthol, a type of oil used for localized, artificial cosmetic enhancement (often highly unsuccessfully…), which has nothing at all to do with steroids (or any other actual performance enhancer). We could equally claim “dieting ruined her breasts” when a looks-obsessed woman suffers a burst breast implant—a ridiculous non sequitur.

  3. Comparisons are usually made based on extremes. If e.g. a world-class body builder spends twenty years taking steroids, HGH, IGF-1, and whatnot in enormous doses, and develops some form of health problems, this does not automatically mean that an amateur who uses much more moderate doses of a single drug will immediately develop such problems—or necessarily even after twenty years.

    Similarly, much of the public perception of steroids (and PEDs in general) goes back to the East-German (and other Eastern European) athletes from the 1980s, in particular the female athletes. What was seen there, however, does not necessarily have much importance for the average gym goer of today, not least because the comparison is with world-class athletes on a forced regimen—but also because the knowledge of how drugs work has grown and the drugs available have become more sophisticated. For a man, the partial comparison with women is also misleading, both because the physiological reactions can be different outright and because some effects considered negative for a woman need not be negative for a man. Some, e.g. a deeper voice, might even be seen as positive. (Of course, those that affect health, not just superficialities, are negatives for everyone.)

  4. Effects of various drugs are often conflated, especially through “steroid blaming” (e.g. with Gregg Valentino above). For instance, the so-called “roid gut” appears to have little or nothing to do with steroids. Instead, it arises through growth hormones*, which simply make everything grow—including the internal organs. This to the point that some people appear to think that any and all PEDs are steroids.

    *Generally, I have the impression that growth hormones are considerably more problematic than steroids in terms of side-effects. This impression could be wrong, however.

  5. There seems to be a knee-jerk reaction to associate any health problem in a body builder or strength athlete with drugs in general or steroids in particular. However, a proper comparison must look at aggregates and not individual examples: There are plenty of non-drug users who have developed severe health problems, including e.g. heart problems, at forty or fifty, even many who have died. The question is therefore not whether such cases occur among drug users—but whether* they are more common and/or more severe. However, this differentiation is not made: Instead it is “X died at age 50; he took drugs; ergo, the drugs killed him.”

    *The result of such an investigation can very well be that they are more common and/or severe—I am not saying that e.g. steroids are harmless. The matter at hand is one of scientific thinking and intellectual honesty, not the pros and cons of drugs.

    Similarly, there is often a blanket attribution of cause and effect whenever a potential cause is known—and this is not limited to e.g. PEDs. If x percent of the users of a certain drug have a certain problem, we cannot conclude that this drug caused the whole x. Instead, we have to make a comparison with an otherwise comparable control group. If we find that y percent of these have the same problem, then the drug, approximately/statistically speaking, caused x – y percentage points of the cases. (See the small sketch after this list.) Similarly, a smoker who dies of lung cancer did not necessarily develop lung cancer because he smoked: Chances are that he did, and smoking certainly did not help—but he could still be among those caught by another cause, e.g. air pollution. There simply is no guarantee that he would have been spared, had he not smoked.

    Strictly speaking, we would also have to make more detailed comparisons in order to judge various issues, but this too is never done (at least outside of scientific research): How is a particular aspect of health influenced by spending hours a day training with weights? By eating twice, thrice, or even four times as much as ordinary people? By using a diet with unusual fat/carbohydrate/protein proportions? By repeatedly “bulking up” and then forcing the body fat down to just a few percent? By weighing a hundred pounds more than normally expected, even be it muscle instead of fat? What if there is some genetic link between an inborn increased ability to build muscle, as would be expected even in a drug-taking top body-builder, and some medical problem? …

  6. Side-effects are often overstated or misreported. For instance, hypogonadism is often cited as a negative side-effect of steroid use: “If you take steroids your testicles will shrink!” Now, this is at least potentially true; however, there is an important addendum that is virtually always left out: They will usually* bounce back again after the steroid use ceases. Further, not all steroids carry the various side-effects to the same degree, and some side-effects can be countered by other drugs**, notably where excess estrogen is concerned.

    *Depending on the state of research, where I lack the depth of knowledge, “usually” might be an unnecessary addendum or replaceable by “almost always”. The time frame and the probability will naturally depend on length of use and quantities used; as well as whether the user has “cycled”.

    **Whether this is a good idea, I leave unstated. It will likely depend on the specifics of the situation, notably what side-effects the second drug has. However, when viewed in light of some arguments against steroids, the possibility must be considered. To e.g. try to scare someone away from steroids with the threat of gynecomastia without mentioning potential counter-measures is just unethical.

  7. A particularly nefarious issue is the constant phrasing with “abuse”: Basically, any and all use of e.g. steroids is called “abuse” in a blanket manner. Good journalism should be impartial and stick to the facts. This includes using value-neutral words like “use” and not value-loaded words like “abuse”—no matter the journalist’s own opinions.
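
As a small illustration of the control-group comparison under item 5 above, consider the following sketch in Python. The numbers are entirely made up for the sake of the example and are not claims about any actual drug:

# Hypothetical numbers only: how much of a problem can be attributed to a
# drug once an otherwise comparable control group is taken into account?
drug_users_affected = 0.12  # x: 12 % of (hypothetical) users show the problem
controls_affected = 0.09    # y: 9 % of a comparable non-using group do too

attributable = drug_users_affected - controls_affected
print(f"Attributable to the drug: {attributable * 100:.0f} percentage points")
# prints 3, i.e. roughly 3 of the 12 percentage points, not the full 12

The simple subtraction is, of course, only a first approximation: the more detailed comparisons mentioned under item 5 (training load, diet, body weight, genetics, …) would also have to be made before drawing firm conclusions.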

Of course, a side-effect of such propaganda is that we no longer know what we can or cannot trust: Is this-or-that recreational drug as dangerous as claimed? It might or might not be—but we are robbed of the opportunity to learn this without doing time-consuming research, because what is said in the media simply cannot be trusted.

In the bigger picture, I suspect that at least part of the problem is that some people come to the conclusion that something is evil, and take it upon themselves to prevent others from coming to a different conclusion through deliberate distortion of facts, demonization of something or someone, irrational emotional arguments, whatnot—they believe* that they have the truth and fear that others are not smart enough to find this truth, if left to their own devices. Indeed, this explains very well the apparent paradox that the surest way to be censored on a feminist blog is to comment with a strong counter-argument, a link to statistics contrary to the point of the original post, or otherwise to do something that could lead other readers away from the (often outrageously untrue) “truth”.

*The twist is, of course, that these people, more often than not, are less intelligent, less informed and more prejudiced, and worse at critical thinking than many or most of the people they try to “protect”. Unsurprisingly, they are also often wrong…

A good example of this is a group of anti-tobacco campaigners who visited my school class when I was some 10 to 12 years old: They started off trying to disgust the pupils away from snus, by discussing the potash content* and how potash was gathered for snus production through doing something** to the contents of chamber pots***… Now, snus is a nicotine product, it is addictive, it can cause health problems: These are all things that could, conceivably should, be told to school children and/or the public in general. Putting forth an absurdly wrong story in order to convince children through a shock effect is simply unethical, intellectually dishonest, and likely does more harm than good: When adults lie about one thing, how can children trust them on another? Why should they believe that snus is addictive, that this is not just another lie to scare them away? Etc.

*I seem to vaguely recall that even this claim was outdated, potash once having been an ingredient, but no longer being so. I could be wrong, however.

**I am a little vague on the details, especially since they simply did not make sense to me even then. (And, of course, the claim had nothing to do with reality, starting with the simple fact that chamber pots barely existed in Sweden at that time.) The story was so preposterous that it can be safely assumed that they were neither ignorant nor stupid enough to believe this themselves—it had to be a deliberate lie told to children in order to manipulate them.

***Surprisingly, the implied pseudo-etymology works almost as well in English as in Swedish: potash -> pottaska, chamber pot -> potta

Another example, which depending on developments might result in a separate post, is the recent claim of the German SPD that women earn 79 cents on the euro—and, oh my, how unfair! I contacted them by email to complain, and the answer (among a number of naive statements) showed that they actually, indisputably knew that any true difference was far smaller at, on the outside*, 5–8 % (i.e. 92–95 cents on the euro)—even using their own numbers. They are deliberately lying to their voters! See also e.g. my discussion of the 77 cents on the dollar and note the similarity of numbers over geography and time—this is exactly the kind of similarity that tends to indicate a biological (rather than e.g. a cultural or societal) variation.** (A small numeric illustration follows after the footnotes.)

*Contrary to the beliefs of the SPD, an unexplained difference of 5–8 % does not mean that we have systematic wage discrimination of 5–8 %—this interval is just an upper limit on the maximal size of any wage discrimination. Using studies with more factors, there is no reason to expect more than at most a marginal variation to remain. Interestingly, they also claim that while the West-German difference was 23 % (i.e. exactly the U.S. 77 cents), the East-German one was a mere 8 %, which ties in well with some thoughts in my previous post. Note especially that the eastern parts of Germany are still worse off than the western parts, and that plenty of educational choices were made and careers started during the GDR era.

**However, two data points do not make for any degree of certainty.
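
To make the arithmetic explicit, here is a minimal sketch in Python. The conversion between a claimed percentage gap and a cents-on-the-euro figure is trivial; the 5–8 % range is the unexplained difference discussed above and, as noted, is an upper bound on any actual wage discrimination rather than a measurement of it:

# Convert a claimed pay gap (in percent) into a cents-on-the-euro figure.
def cents_on_the_euro(gap_percent):
    return 100 - gap_percent

print(cents_on_the_euro(21))  # 79 cents, i.e. a claimed raw gap of 21 %
print(cents_on_the_euro(8))   # 92 cents, lower end of the adjusted 5-8 % range
print(cents_on_the_euro(5))   # 95 cents, upper end of the adjusted 5-8 % range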

Written by michaeleriksson

August 26, 2017 at 7:11 pm