Michael Eriksson's Blog

A Swede in Germany

Further misadventures

Warning: The following serves mostly as stress/tension release. With one thing and another, the Nazi series will likely see an interruption until late next week.

After my long period of problems (cf. earlier texts), things were really beginning to look up again, a day here-and-there with construction works notwithstanding. True, I had partially bought this improvement through slacking off, neglecting my writing and reading, letting the mail mount up again, and not, for now, getting to the bottom of a few outrages (notably, the inexcusable behavior of building management and the local chimney sweep, where I have put off very thorough complaints for over a year)*.

*If and when I get around to them, I might also write a few blog entries on the topic. Chances are that you will not believe me, because the situations are so utterly absurd. (And those complaints are on a very different level from the ones in this text.)

Still, things were looking up: I was beginning to get my energy back, writing was beginning to look good again, I was beginning to read heavier material, and I had the new project of researching emigration from Germany (which by now borders on being a Leftist dictatorship).

Then the screen of my newish computer just dies…

(Fortunately, just a few hours after the latest full backup. I have hopes that the issue will prove repairable, as it might simply be a loose connection somewhere, but there is no guarantee, notebook repairs are often disproportionately expensive relative to the original price, and there is no telling how long a repair might take.)

This, then, amounts to less than four months of use, while its predecessor might have worked for four years.

The next day, yesterday, I went to the local Mediamarkt to look for replacements*. Again, a bit of good luck among the bad—had this happened a little earlier, Mediamarkt might have been closed or off limits due to Covid restrictions.

*As shown both now and around New Year’s, notebooks are highly troublesome when something goes wrong, as the user has to start almost from scratch and he can be restricted in his work for days. With a desktop, I could usually just buy a new one and spend five minutes switching hard drives, while a mere monitor issue could be solved by just replacing the monitor. (Yes, notebook hard drives can also be replaced, but they are much trickier to access, might differ too much in size to fit in another notebook, and the driver situation can be trickier, which makes for more work post-replacement.)

I walked, as I always do, the few kilometers, but found myself more tired and lacking in energy than I would have expected over such a distance. Too much time indoors, due to a mixture of COVID restrictions, low temperatures, and (during last summer) prolonged bronchitis, has really damaged my fitness. (And then we have the question of what this might imply for the future. If I fail to compensate through that much harder work, it might very well cost me a few years of my life when I am in my eighties.)

I walked around Mediamarkt, looking for suitable specimens, beginning, close to the entry of the second floor, with a set of marked-down-due-to-damage computers. Marked down? Maybe, but, apart from the Chromebooks, they were still more expensive than I cared for, and I had the slight fear that some mixture of stampeding inflation, bottlenecks for various chips and whatnots, and market segmentation* would make the affair far more expensive than intended.**

*E.g. in that only Chromebooks and various Android devices can be had cheaply, while a “grown up” computer goes for massively more. Chromebooks et al, however, are not suitable for my current purposes.

**A few years back, I wrote about an atypical lack of progress or even regression in terms of bang-for-buck when it comes to computers. At that time, the decades-long trend towards ever more bang for the buck was temporarily broken. A reason for this might have been the vastly increased demand for smartphone components.

I walked over to where the regular items were found, easy to spot and well displayed—and so expensive that I could not believe my eyes. The cheapest (!) went for 900-something Euro, while the median might have been in excess of 1200.* As a comparison, my first notebook, bought in, maybe, 2000, went for less than 3000 DM or around 1500 Euro. (Yes, this was a low-end specimen and there have been more than twenty years of inflation—but there have also been more than twenty years of technological progress.)

*Reservation: I go by memory and did not take exact mental notes. The general idea holds, even should I have the details wrong.

Then, the day seemed saved: in a much less visible aisle, I found a handful of notebooks at much more moderate prices, of which I picked two (of different models) for a total of less than 900 Euro. These, while low-end, offered even better bang-for-buck than the last time around.*

*Which points to the broken trend having resumed in the interim. Imagine my relief.

Sadly, the year is 2022 and the notebooks still all came with a useless and price-increasing Windows installation. Again: 2022—not 2002. This shit should be long behind us.

I went back home on foot and, having a bad conscience about my fallen endurance, picked a road a little longer and much hillier. (Wuppertal has no end of hills, if one picks the right or, depending on perspective, wrong path.) The result: For the first time in years, even with my building in sight, I had to stop to get my breath back—and the last time around I had a load both heavier and more awkward to carry.* Halfway up to the third floor, I had to halt again—also for the first time since that heavier-and-more-awkward load. Once in my apartment, I put my notebooks down, kicked off my shoes, dropped my jacket to the floor, too tired to hang it, and then I lay down on the floor myself, where I spent several minutes. Now, I am not saying that this day would have been an outright picnic in the past, but… Two years ago, I would neither have had to stop, nor would I have found myself on the floor afterwards—and I suspect that I would have held a higher average tempo.

*The sum of bag and notebooks might have been around 5 kg, maybe less. (Weight is an area where there really has been progress.) To my very vague recollection, the prior event, involving furniture, might have been at 16 kg, but, in all fairness, over a shorter and flatter course.

After a brief excursion, at a snail's pace by my standards, to buy food, I spent most of the remainder of the day feeling really lousy, as I do after an overexertion. The intended high point of the day was a Tex-Mex pizza from the local store (one of my favorite dishes). I put it off until the evening—delayed gratification and all that. Twenty-five minutes in the oven, as I like the pizza crispy and firm, and it should have been good to go. But no. I tried to fish it out onto a plate with a fork, as I always do and as has never failed me in the past. This time, the anything-but-firm dough split around the fork tines and the pizza landed in a heap on the oven door. I tried to grab the heap with the fork and a hand, and the fork just went through it again, leaving nothing that could reasonably be eaten.*

*I do not know what the problem was. A possibility is that I had not turned one of the knobs far enough, but, if so, I should have noticed it when I turned off the oven—as I always have on the very few prior occasions when a knob has been short of the mark.

Today, I began the installation of Linux (Gentoo). Here things grew tiresome again. For starters, I had, around New Year’s, downloaded the installation manual* to an e-reader—which should have been perfect right now. But no. When I opened the document, the font was on the small side, so I picked a larger one. The result: The reader locked up in “hour-glass mode” for so long that it went into power-save** mode before the document had reloaded. Once done, simply going from page 2 to page 3 caused another massive delay, after which the power-save mode was reactivated. After turning the thing on again, I was still on page 2… After several repetitions, I tried to go back to the smaller size, as things had worked to begin with. The reader worked for half an eternity, went into power-save mode—and was, surprise, still using the larger font afterwards. Several repetitions brought no improvement.

*Note that Gentoo is a distribution for somewhat more proficient users, with a lot more manual work and more decisions of one’s own to make than with e.g. Debian.

**Due to the minutes of waiting. The battery, to avoid misunderstandings, was fully charged.

I gave up and began the installation on the first notebook. Just as I recalled, the installation medium did not contain the installation instructions (a bizarre choice, especially considering how little space would be needed), and the central “man” command for displaying other documentation was equally missing. Fortunately, “cryptsetup” was present and I could mount my (encrypted) backup drive, where I, among other needed things, did have a copy of the installation guide. From here on, things went much more smoothly than around New Year’s; in part, because I had some experience; in part, because I could forego a number of steps and just populate most of the hard drive from the backup drive. However, there were still quite a few curses, due to the incompatibilities between the defaults of the installation shell and my own ingrained-in-my-fingers preferences. Some obscure errors, including tmux refusing to start, held me back for a while, because I had forgotten to manually add a “/tmp” directory (which I do not back up).*

*And why is it so hard to give decent error messages? Pretty much the first rule of writing error messages is to indicate what object caused the problem. It should never be e.g. “File not found!”, but “File XYZ not found!”. Ditto, never “Access denied!” but “Access to XYZ denied!” (unless the object is obvious from the interaction).
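
To illustrate the point, a minimal sketch in Python (the function and the configuration file are hypothetical; the principle is what matters):

    from pathlib import Path

    def read_config(path: str) -> str:
        # Read a configuration file, naming the offending object in any error.
        p = Path(path)
        if not p.is_file():
            # Bad:  raise FileNotFoundError("File not found!")
            # Good: name the file, so that the user knows where to look.
            raise FileNotFoundError(f"File {p} not found!")
        return p.read_text()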

Still, apart from some issue with the sound,* I had a working computer and a working Internet much, much faster than last time around—and I decided to carry on with the second, too, today. (Originally intended for tomorrow.)

*A missing driver, likely. I will look into that later. The device is found by “lspci” but is not in e.g. the “/dev” tree.

But no. The notebook definitely has a functioning hard drive, as it managed to boot into the pre-installed Windows when I was a little slow with entering the BIOS. However, when I tried the Gentoo installation, this hard drive simply could not be found. There is not even a “/dev” entry. If I have the energy, I will troubleshoot tomorrow, but in a worst case, I might have to replace the boot-image for the installation. Absurd. Again, the year is 2022 and interfaces should be sufficiently standardized that something like this simply cannot happen.

(I also have some misgivings about the keyboard layout. As I noticed during my brief experiments, a few keys had been moved out of position in a manner that could be extremely annoying to the touch typist. Another first rule, and another one all too often violated—if you design keyboards, keep the touch typist in mind, not just the hunt-and-peck typist.)

Written by michaeleriksson

April 23, 2022 at 9:52 pm

Follow-up: Revisiting verbosity based on Wesnoth

A minor unfairness in my recent text on Wesnoth was the use of an outdated version (1.12), instead of the latest (1.14).

I have since installed and briefly tried 1.14 through use of Debian’s “backports”. My experiments were cut short by the fact that it was unplayable on my computer for performance reasons. Specifically, the game used up an entire processor core even when doing nothing* and was slow as molasses—click somewhere and the reaction took place ten seconds later…**

*Including when on the start screen and when all animations were deactivated.

**Note that this is a different type of delay than discussed in the original text: There we had artificial delays deliberately introduced by misguided developers; here we have delays as a side-effect of an overly resource-consuming application.

If (as it likely was*) this performance delay was caused by changes to Wesnoth itself, it demonstrates a disastrous attitude so common in today’s software—the assumption that any amount of performance waste is acceptable in order to gain minimal benefits. Notably, nothing that this game does should cause such delays. That the display part can be handled much more efficiently is proved by older versions of the game, the amount of calculation needed during user interaction is negligible, and the observed delays were by no means restricted to the AI’s/computer’s play**. And, no, neither 1.14 nor the older versions use graphics of an impressive kind—strictly 2D, a mostly static map (especially with animations off), no fanciful textures, …***

*It is unlikely-but-conceivable that some version incompatibilities or other problems in/with/between the simultaneously backported libraries and/or the original setup led to problems that are not the fault of Wesnoth. (The backports are typically nowhere near as thoroughly tested and come with far fewer guarantees than the “regular” Debian packages.)

**If it had been, it might be explained by deep analysis, simulation, or whatnot preceding the AI’s decisions. However, considering how trivial and fast the AI had been in older incarnations, this would have been a surprising development.

***Which is good: There is no true benefit to be found from such features in a game of this type. Equally, chess is played just as well with a simple wooden board and wooden pieces as if diamonds and rubies were used—likely, better.

I did manage some minor comparisons with 1.12 before giving up, however, and must amend my original criticism slightly: The blend-in of text and movement-to-and-from-a-war-council in 1.12 were not as slow as I perceived them when writing the original text. (However, both are still entirely unnecessary delays. Also note that at least the latter is sped up by a factor of four in my settings, compared to the default.) Here we likely see an effect of different standards of comparison: During regular play, I am used to things happening very quickly; coming straight from experiences with 1.14, I saw the contrast to the truly slow.

(As for differences, they appeared to be mostly optical. However, I did not manage any non-trivial game play, tested some of the features not at all or only very superficially, and there might have been significant changes that I am not aware of.)

Excursion on performance:
It is true that processing power is quite abundant today, with even a low-end cell-phone comparing well to the first computers that I knew. However, it is still not a limitless resource; not all computers have top-end processors (graphics cards, whatnot), notably because even an outdated low-end notebook can handle almost any task with ease;* and there are still factors like environmental impact and battery life to consider when a heavy workload increases energy consumption. In some cases, even heat might be a factor to consider—what if a game is not playable on a hot summer’s day?

*For instance, my current notebook has a quad-core processor topping out at 2 GHz and only “on-board” graphics—and that is more than enough to watch even HD movies, play older versions of Wesnoth, listen to music, browse the Web, etc. Indeed, the average load on my processor is often below 1 %…

It is also noteworthy that some of this computing is a complete waste, e.g. because (as with 1.14) there is no benefit to it. Often, it is based on faulty assumptions about what the user wants.* Often, it is a pseudo-optimization; sometimes, it is even counterproductive.** It is particularly infuriating when an idle application (as with 1.14 above) runs the processor up—by rights, it should have a negligible processor load. Even as a professional software developer, I have problems understanding how they manage the opposite (a sketch of the difference follows the footnotes): Is it amateurish “busy waiting”? Is it polling multiple times per second for an event that takes place every few hours? Is it a hidden malware that tries to spy on me? Is it an equally hidden bitcoin-mining operation? …

*For instance, many applications leave background processes hanging around after they are (ostensibly) terminated. This is usually with the intent that the application should start faster the second, third, fourth, … time around—but what if this does not happen for several weeks or only after a re-boot? Worse, there are some applications (especially on Windows machines) that insist on starting such background processes before the first application start. What if the actual application is then never started?

**A good potential example of counterproductive “optimization” is the “save_index” file of Wesnoth: Interacting with the saves is usually faster after deleting it… (I have not studied its exact purpose, but based on the name, it is likely intended to speed up said interaction. I note that I have never experienced any negative side-effects of the deletion.) And, no, this is not a unique example: I first deleted this file because I knew that emptying a cache (or a similar measure) had often led to a speed-up in other applications and thought that I should at least give it a try. Indeed, while caches can have major benefits in the right situation, they are often more a hindrance than a help. For instance, it only rarely makes sense for an application to add its own file caching on top of the file caching done by the operating system and other mechanisms. (And the introduction of SSDs has created many situations where the value of any file caching is strongly reduced compared to the past.)
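
As a sketch of the busy-waiting point, in Python (the event queue and handler are hypothetical; the contrast is what matters): the first loop pins an entire core even when idle, while the second sleeps until an event actually arrives and causes a negligible idle load.

    import queue

    events = queue.Queue()

    def handle(event):
        print("handling", event)

    def busy_poller():
        # Amateurish "busy waiting": the loop spins even when nothing
        # arrives, pinning an entire core while the application is idle.
        while True:
            try:
                event = events.get_nowait()
            except queue.Empty:
                continue  # loop again immediately; the core never rests
            handle(event)

    def blocking_waiter():
        # Event-driven waiting: get() blocks inside the kernel until an
        # event arrives; the idle processor load is effectively zero.
        while True:
            handle(events.get())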

Written by michaeleriksson

December 2, 2018 at 1:39 am

Of Mice and Computer Users

There are days when I can barely suppress the suspicion that life is a weird cosmic joke, “Truman Show”, or scientific experiment that replaces mice in a labyrinth with humans in a Matrix.

Today is one of those days: I had cleared almost every task of my schedule in order to dedicate myself to the tax declaration for 2017—after having postponed it again and again for the last two weeks and knowing that it would likely leave me in too poor a mood to risk anything else that could aggravate me. (Cf. a number of earlier texts, e.g. [1].)

Then a server that I had had running for several weeks without problems crashed. I rebooted it, started things up again, and decided to do some minor clean-up while still logged in. (Should have stuck to the plan…) In the process, I (very, very unusually) managed to screw up a command, leading to all the files in the user account being moved. While I discovered this immediately after submitting the command, I could not interrupt it, because my session froze… After a minute or so of waiting, I forced a new reboot.

Less than thrilled, I proceeded to clean up the damage (fortunately, no irrecoverable damage had taken place)—only to now have my notebook complain about a CPU being stuck and eventually freezing, forcing a reboot of the notebook… Only yesterday, I had noted an up-time of roughly sixty days—today, right in this already annoying situation, it fails! Worse, after the reboot, after I had got everything* back up again, the notebook just crashed. Roughly sixty days without problems and then two forced reboots in twenty minutes. Worse yet, I next decided to use the latest installed kernel,** seeing that I trailed heavily in versions, and that newer kernels are usually better—and found myself needing yet another reboot within five minutes…

*With a number of different user accounts, different encryption passwords, and whatnot, this takes a lot more time for me than for the average user. Normally, this is not a problem, because I only need to reboot every few months. When I have multiple reboots in a single day, the situation is very, very different.

**At some point, the newest release became unstable with my notebook, and I set up my boot-loader to use an older, stable kernel per default. However, that was at least six months ago, running an older kernel is a potential security risk, and I had hoped that the current newest release would have resolved these problems in the interim. Unfortunately, this was not the case. Switching to a newer kernel in the above situation was, admittedly and with hindsight, pushing my luck; however, if I had not tried it today, the next “natural” opportunity might have been another sixty days into the future. (Indeed, going by my existing plan, I should have switched kernels already for the first reboot, but simply did not remember to do so until the third attempt.)

As of now, I have an up-time of a little more than four hours (back with the old kernel), and hope for another sixty-ish days. However: Half the day has been wasted between the extra efforts and the time needed to restore my mood—and I am not taking the risk of attempting the tax declaration today, lest things end with a notebook that crashes in a more literal sense (say, into the nearest wall).

Written by michaeleriksson

October 25, 2018 at 3:38 pm

Detection of manipulation of digital evidence / Follow-up: A few points concerning the movie “Anon”

In a recent discussion of the movie “Anon”, I noted, regarding the uselessness of digital evidence, “Whatever is stored […] can be manipulated”, with a footnote on the limitations of write-once storage (an obvious objection to this claim).

A probably more interesting take than write-once storage is the ability to detect manipulation (or accidental change). There are many instances where some degree of protection can be added, say, a check digit or a checksum for an identifier (e.g. a credit-card number) or for a larger piece of content (e.g. an executable file), cryptographic verification of extended change history in a version-control system (notably Git), or any number of Blockchain applications (originating with Bitcoin). The more advanced uses, including Blockchains, could very well be legitimately relevant even in a court of law in some cases.
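
A minimal sketch in Python of the simplest of these mechanisms, the Luhn check-digit scheme used for credit-card numbers (the numbers below are standard test values, not real cards), which also demonstrates the recalculation problem discussed further down: change a digit, recompute the check digit, and the manipulated number validates again.

    def luhn_check_digit(payload: str) -> int:
        # Compute the check digit for a digit string (sans check digit).
        total = 0
        for i, ch in enumerate(reversed(payload)):
            d = int(ch)
            if i % 2 == 0:      # every second digit, counted from the right,
                d *= 2          # is doubled...
                if d > 9:
                    d -= 9      # ...and reduced by 9 if the result exceeds 9
            total += d
        return (10 - total % 10) % 10

    def is_valid(number: str) -> bool:
        # Validate a full number (payload plus trailing check digit).
        return luhn_check_digit(number[:-1]) == int(number[-1])

    original = "79927398713"     # a valid test number
    manipulated = "79927398813"  # one payload digit changed; now invalid
    repaired = manipulated[:-1] + str(luhn_check_digit(manipulated[:-1]))
    print(is_valid(original), is_valid(manipulated), is_valid(repaired))
    # True False True: recomputing the check digit hides the manipulation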

In most cases, however, these are unlikely to be helpful—starting with the obvious observation that they only help when already in use at the time of the manipulation, which (today and for the foreseeable future) will rarely be the case.* Worse, the victim of a manipulation will also need to convince the court that e.g. the planted evidence would necessarily have been covered by such verification mechanisms: Consider e.g. someone who meticulously keeps all his files under version control, but where incriminating evidence is planted outside of it. He can, obviously, claim that any file or change of a file actually owned by him would have been registered in version control. However, how can he prove this claim? How does he defeat the (not at all implausible) counter that he kept all his regular files in version control, but that these specific files were left outside due to their incriminating character, in an attempt to hide them from a search by a third party?

*I note e.g. that the technologies are partly unripe; that the extra effort would often be disproportionate; and that a use sufficiently sophisticated to be helpful against hostile law enforcement might require compromises, e.g. to the ability to permanently delete incriminating content, that could backfire severely. In a worst-case scenario, the use of such could itself lead to acts that are considered illegal. For instance, assume that someone inadvertently visits a site with a type of pornography illegal in his own jurisdiction, that the contents are cached by the browser, at some point automatically stored in a file-system cache, and that all contents stored in the file system are tracked in such detail that the contents can be retrieved at any future date. Alternatively, consider the same example with contents legal in his jurisdiction, followed by travel with the same computer to a jurisdiction where those contents are illegal. Note that some jurisdictions consider even the presence in a browser cache, even unbeknownst to the user, enough for “possession” to apply; by analogy, this would be virtually guaranteed to extend to the permanent storage discussed here. (This example also points to another practical complication: This type of tracking would currently be prohibitive in terms of disk space for many applications.)

Even when such measures are used and evidence is planted within their purview, however, it is not a given that they will help. Consider (for an unrealistically trivial example) a credit-card number, where a single (non-check) digit has been manipulated. A comparison with the check digit will* make it clear that a manipulation has taken place. However, nothing prevents the manipulator from recalculating the check digit… Unless the original check digit had somehow been made public knowledge in advance, or could otherwise be proved, the victim would have no benefit in a court of law. Indeed, he, himself, might now be unaware of the manipulation. The same principle can be used in more advanced/realistic scenarios, e.g. with a Git repository: While a naive manipulation is detectable, a more sophisticated one, actually taking the verification mechanisms into consideration, need not be. If in doubt, a sophisticated manipulator could resort to simply “replaying” all the changes to the repository into a fresh one, making sure that the only deviation in content is the intended one.** If older copies are publicly known, deviations might still be detected by comparison—but how many private repositories are publicly known?*** The victim might still try to point to differences through a comparison with a private backup, but (a) the manipulator can always claim that the backup has been manipulated by the victim, and (b) it is not a given that he still has access to his backups (seeing that they are reasonably likely to have been confiscated at the same time as the computer where the repository resides).

*With reservations for some exceptional case. Note that changing more than one digit definitely introduces a risk that the check digit will match through coincidence. (It being intended as a minor precaution against accidental errors.)

**Counter-measures like using time stamps, MAC addresses, some asymmetric-key transfer of knowledge to identify users, …, as input into the calculations of hashes and whatnots can be used to reduce this problem. However, combining a sufficiently sophisticated attacker with sufficient knowledge, even this is not an insurmountable obstacle. Notably, as long as we speak of a repository (or ledger, Blockchain, whatnot) that is only ever used from the computer(s) of one person, chances are that all information needed, including private keys, actually would be known to the manipulator—e.g. because he works for law-enforcement and has the computer running right in front of him.

***In contrast, many or most Git repositories used in software development (the context in which Git originated) will exist in various copies that are continually synchronized with each other. Here a manipulation, e.g. to try to blame someone else for a costly bug or to remove a historical record of a copyright violation, would be far easier to prove. (But then again, we might not need a verification mechanism for that—it would often be enough to just compare contents.)

Worse: All counter-measures might turn out to be futile with manipulations that do not try to falsify the past. Consider some type of verification system that allows the addition of new data (events, objects, whatnot) and verifies the history of that data. (This will likely be the most typical case.) It might now be possible to verify that a certain piece of data was or was not present at a given time in the past—but there is no automatic protection against the addition of new data here and now. For instance, a hostile with system access could just* as easily plant evidence in e.g. a version-control system (by simply creating a new file through the standard commands of the version-control system), as he can by creating a new file in the file system.

*Assuming, obviously, that he has taken the time to learn how the victim used his system, which should be assumed if someone becomes a high-priority target of a competent law-enforcement or intelligence agency.

Then we have complications like technical skills, actual access to the evidence, and similar: If digital evidence has been planted and a sufficiently skilled investigator looked at the details, possibly including comparisons with backups, he might find enough discrepancies to reveal the manipulation. However, there is no guarantee that the victim of the manipulations has these skills*, can find and afford a technical consultant and expert witness, has access to relevant evidence (cf. above), … To take another trivial and unrealistic example: Assume that a manipulating police employee adds a new file into the file system after a computer has been confiscated. In court, testimony is given of the presence of the file, even giving screen shots** verifying the name, position, and contents of the file—but not the time stamp***! With sufficient access and knowledge, the defense could have demonstrated that the time stamp indicated a creation after the confiscation; without, it has nothing—no matter what mechanisms were theoretically available. (A minimal sketch of such a time-stamp check follows the footnotes.)

*And even when he has these skills himself, he would likely still need an expert witness to speak on his behalf, because others might assume that his technical statements are deliberate lies (or be unwilling to accept his own expertise as sufficiently strong).

**I am honestly uncertain how this would be done in practice. With minor restrictions, the same would apply even if the computer was run physically in the court room, however. (But I do note that screen shots, too, can be manipulated or otherwise faked, making any indirect evidence even less valuable.)

***Here the triviality of the example comes in. For instance, even many or most laymen do know that files have time stamps; the timestamp too could have been manipulated; if the computer was brought into the court room, the defense could just have requested that the time stamp be displayed; … In a more realistic example, the situation could be very different.
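
For concreteness, a minimal sketch in Python of such a time-stamp check (the confiscation date and file path are hypothetical; and, per the footnotes above, time stamps can themselves be manipulated, making this an indication rather than a proof):

    import os
    from datetime import datetime, timezone

    CONFISCATION = datetime(2018, 7, 1, tzinfo=timezone.utc)  # hypothetical date

    def changed_after_confiscation(path: str) -> bool:
        # st_ctime is the last inode change on Unix (creation time on
        # Windows); a value after the confiscation date is a red flag
        # for a planted file.
        st = os.stat(path)
        changed = datetime.fromtimestamp(st.st_ctime, tz=timezone.utc)
        return changed > CONFISCATION

    print(changed_after_confiscation("/home/suspect/planted.txt"))  # hypothetical path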

Excursion on auditing:
Some of these problems could be reduced through various forms of more detailed user auditing, to see exactly who did what and when. This, however, runs into a similar set of problems, including that such auditing is (at least for now) massive overkill for most computer uses, that auditing might not always be wanted, and that the auditing trail can itself be vulnerable to manipulation*. To boot, if a hostile has gained access to the victim’s user account(s), auditing might not be very helpful to begin with: It might tell us that the user account John.Smith deleted a certain file at a certain time—but it will not tell us whether the physical person John Smith did so. It could equally be someone who has stolen his credentials or otherwise invaded the account (e.g. in the form of a Bundestrojaner).

*To reduce the risk of manipulation, many current users of auditing store audit information on a separate computer/server. This helps when the circumstances are sufficiently controlled. However, when both computers have been confiscated, the circumstances are no longer controlled. To boot, such a solution would be a definite luxury for the vast majority of private computer users.

Excursion on naive over-reliance in the other direction:
Another danger with digital evidence (in the form discussed above or more generally) is that too great a confidence in it could allow skilled criminals to go free, through manipulation of their own data. A good fictional example of this is given in Stephen R. Donaldson’s “Gap Cycle”, where the (believed to be impossible) manipulation of “datacores”* allows one of the characters to get away with horrifying crimes. Real-life examples could include an analogous manipulation of tachographs or auditing systems, if these were given sufficient credibility in court.

*The in-universe name for an “append-only” data store, which plays a similar (but more complex and pervasive) role to current tachographs in tracking the actions taken by a space ship and its crew.

Excursion on digital devices in general:
Above, I deal with computers. This partly, because “traditional” computers form the historical main case; partly, because most digital devices, e.g. smart-phones, formally are computers, making it easier to use “computer” than some other term. However, the same principles and much of the details apply even with a broader discussion—and for a very large and rising proportion of the population, smart-phones might be more relevant than traditional computers.

Written by michaeleriksson

July 11, 2018 at 2:34 am

The horrors of October 31st

This October 31st we have that yearly horror of my current client’s, that thing that has the employees groaning and wishing they could be somewhere else, that most dreaded part of the year.

No, not Halloween: The deadline for the annual security awareness training.

There is so much wrong with it that I hardly know where to begin—and honestly doubt that I will manage to remember all issues. To give it a try:

  1. In order to complete the training, an online course, it is necessary to use a Flash* program/lecture/presentation/interactive course/whatnot loaded over an external** website.

    Pause right there: It is necessary to use a FLASH program from an EXTERNAL website—in order to take a SECURITY course.

    In other words, the greatest single endangerment of my work computer and my client’s internal network that I am involved with in the course of the year is the security course…

    As one of the colleagues remarked, he actually considered the possibility that the course was some form of test: Refuse to take it and complain to the security officer—automatic pass. Take the course—automatic fail.

    *Writing this, I contemplate the minor possibility that the course might have been re-written to use some variation of HTML5 and JavaScript, although it still felt and acted like Flash—unlikely, but possible, and I mention it for the sake of fairness: Last year, I definitely had to take actions to re-activate Flash and to grant it access to the sound system, things I have de-activated as a matter of course. This year, I did not. It could be a re-write, it could be that some automatic update had re-activated/-reset Flash. (Something that has happened repeatedly in the past with this client.) I also have JavaScript deactivated, as a matter of course, but since I deliberately switched from Firefox to IE for the duration of the course, the fact that I did not have to reactivate JavaScript means nothing.

    **I am unaware of the actual authorship of the program and to what degree the client is able to control contents. However, the contents are most definitely from an external website, implying that even if the program was non-malicious to begin with, there is no guarantee that it still was so at the time of the download. Of course, Flash is well-known as one of the greatest security horrors, with the most vulnerabilities, of any web-based technology. It is no coincidence that even those who once were hailing it as the future are now distancing themselves, nor that future developments will not take place: Earlier this year, Adobe, the maker of Flash, announced its end-of-life.

  2. The presentation is poorly made, with many unnecessary moving objects, artificial and droning voices, and other annoyances and distractions. The general format is similar to a PowerPoint-style presentation: Imagine someone being given an extensive introduction into the various features of such a presentation program—but not one word on how to make a good presentation in any non-technical regard. Imagine this someone, as such people often do, going nuts with every available feature, without any regard for anything but feature use. That type of presentation is the equivalent of this course.

    Why a presentation style course instead of possibly two pages of text and a questionnaire to begin with? Beats me…

  3. Most of the contents are too trivial to keep a computer professional from boredom or to teach him anything really useful. At a minimum, different courses for different target groups, with different skill levels, should have been provided. I, e.g., have read several books and many articles on various topics related to computer security, including two by the infamous Kevin Mitnick on social engineering. What do I gain from being shown one or two presentation slides that amount to “watch out for social engineering”? Nothing: Either I already have a certain knowledge and understanding or I do not. If anything, chances are that I would be better qualified to hold a course in computer security for the makers of this course, than they are to hold one for me…

    (A partial explanation might be that the keyword is not so much “security” as “awareness”: The intention is likely less to educate people about security and more to remind them of the importance, which also explains why what amounts to the same course is mandatory every year, rather than just once. This is to some degree something that can be of value even to those with a considerably above average knowledge. It is also something that could be done much, much more efficiently and effectively, and without boring the “students” to tears.)

    To boot, the general level of the course is truly for the “lowest common denominator”, suitable for high-school drop-outs, and extremely condescending: Let’s see if you can help Mary avoid phishing! I can only be thankful that this was not a course on dogs or English: See Spot run…

  4. Considering the low amount of actual content, the course is much too long*, especially since there is a boredom factor, with the ensuing lack of concentration—and I repeatedly caught myself drifting off to the point that I had missed what was said. Cutting it down considerably would have resulted in something with greater educational value (for those weaker in knowledge) for the simple reason that they would be that much more focused. For those already knowledgeable, it would have shortened the pain.

    *I did not time my effort and also paused the course several times to answer questions concerning/suggest solutions for a work problem—as well as getting at least two cups of coffee. However, in a guesstimate, the actual “course time” might have been around two hours. At any rate, even materials for a beginner should have been coverable at, say, three times the tempo used; for those knowledgeable, with less material needed, there was likely less than five minutes worth of content…

  5. Interactive questions: The progress checking takes the form of a number of multiple-choice and match-left-item-to-right-item style questions to answer. Most of these are fairly useless and/or can be answered without taking the course based on common sense and an ability to guess what type of answer this type of test maker wants to hear. (The reader might recognize the latter part from high school or some social-science course in college.) This to the point that several questions are of the type “Which of these items are dangerous?”—with the correct answer “all”.

    At the same time, some require actually deliberately giving a wrong answer, because there is no logic or insight behind many of them, merely a mechanical comparison to earlier examples. Notably, I needed three* tries to answer a matching question for the simple reason that I matched the label “quid-pro-quo” to an example actually containing a quid-pro-quo… Unfortunately, the test makers did not follow the normal meaning of “something-for-something” in a trade/barter situation (where, for all I care, one of the parties might be dishonest), but instead intended something along the lines of “pretending to offer something so that someone else unwittingly would give up something valuable” (specifically, “pretend to want help you with your computer so that you thoughtlessly give access to it”)—something incidentally matching the normal meaning of another item, “pretexting”, very well… The intended match for “pretexting”, in turn, had very little to do with the normal meaning of “pretending to want something in the hope of actually getting something else” or “using the claim of wanting something as an excuse for an action with a different agenda”, but instead referred to a social-engineering practice of pretending to know something/being someone, or offering a bit of known information, in the hope of learning something new that could later be used for further infiltration.**

    *Multiple tries are allowed, which reduces the insight needed even further, especially with the low number of possible answers. However, rumor had it that there is a three-strikes limit, and I did grow a bit nervous there. Specifically, I got the first try wrong due to quid-pro-quo and, not even reflecting on the possibility that that could be the issue, I just turned two other matches around, and failed again (because quid-pro-quo was still in the “wrong” match.)

    **Disclaimer: I go by memory here, not having access to the actual questions at the moment. It is conceivable that my details are off—but not the overall principle.

    In such cases, it might actually be an advantage in not being a sharp thinker and not having much prior knowledge. Notably, someone who lacked an understanding of quid-pro-quo (e.g. a high-school drop-out…) might just go blindly by the examples to begin with, and get it “right” in one attempt.

    To my recollection, I had one other answer turned down: In a “choose all things on this computer desktop where secrecy is needed” (or similar) scenario, I reasoned that the test makers probably wanted to see the icon for MS Word included, seeing that careless use of MS Word can be a confidentiality issue*. They did not: They argued that MS Word is a program and, unlike data, is not a matter of confidentiality. This actually matches my own opinion, but it was also a distinction that I had judged to be beyond the intellectual horizon of someone engaging in such extreme dumbing down. In other words, the format and “stupid” questions, which moved the test taker to not give the right answers, but the “right” ones, back-fired on me.

    *Notably, the totality of the information present is not necessarily equal to what can be read in the document, due to meta-information, “track changes”, comments, and possibly some other mechanisms. Say that the sender of a document has the display of “track changes” turned off, the recipient turned on, and that the changes contain confidential data (or e.g. derogatory remarks).

  6. Some of the items take an attitude which is practically unrealistic or too focused on the security aspect. For instance, one question described a situation where someone dropped a report of some type near a fax machine, despite this type of report normally only being sent by email: Guessing the intentions of the test makers correctly, I opted to keep quiet in the moment and bring the issue to the immediate attention of HR. In theory, this might be a good idea. In real life? Probably a very bad idea: There is an undue risk for both me and the other party, in that I could be seen as paranoid, untrusting, or unfriendly (especially if details got out or I had bad luck with the who-knows-whom), and the other party might see his reputation hurt by unfair suspicions—bear in mind that most instances of suspect behavior actually have a non-malicious explanation. Left to my own devices, I would probably have just asked for an explanation, feigning casual curiosity*. Depending on what that explanation ended up being (possibly including factors like delivery), I might or might not have talked to HR or done some alternate research. For instance, if the answer was “Bob is stuck with a dead lap-top battery and needs the report urgently for a customer negotiation”, I would have pretended to take it at face value—and at first opportunity, again casually, brought the topic up with Bob. Now, if Bob had a different story, then I would have talked to HR**.

    *As opposed to the “confront” alternative given among the multiple choices.

    **Or someone like the security officer, the other party’s boss, whatnot. Depending on company culture, regulation, and the individuals involved, HR is not necessarily the best starting point—nor even necessarily a good one. In fact, I suspect that a partial reason why HR was the “right” answer is that going to HR puts the employer in full charge of the process, which might be preferred for reasons unrelated to security topics (but is not automatically in the best interest of the other parties involved). From another point of view, many people in corporate hierarchies see themselves as necessarily smarter, having better judgment, being better educated, whatnot, than those theoretically lower in the hierarchy. This might be true when most of the employees are e.g. uneducated factory-floor workers or clerks. In my field of work and during my career, a Master’s degree in a STEM subject has been the norm, and the situation is correspondingly very, very different. (Admittedly, this is changing for the worse over time.)

  7. Many highly needed pieces of advice (to the uninformed) are left out, notably safe-surfing tips like “make sure that Flash and JavaScript are deactivated per default”…
  8. Technical problems: At least two colleagues have complained about program interruptions and state not being saved, forcing them to start over—with something that was a chore the first time around. I suffered a “website not responding” scare myself, but program execution resumed shortly after.
  9. Political correctness: There are plenty of images of people (none of them adding any value). To my recollection, only one features a white man: An image of a disgruntled employee, out to do harm to his employer, sadly hunched over his computer, face hidden. The rest were women, various non-Whites, or both—all smiling, happy, beautiful.

    (It is saddening that this topic pops up even in a context where it should be entirely irrelevant.)

Written by michaeleriksson

October 28, 2017 at 4:34 pm