Michael Eriksson's Blog

A Swede in Germany

Posts Tagged ‘Personal’

Follow-up: Dropping the ball on version control / Importing snapshots into Subversion

with one comment

As a follow-up on yesterday’s text ([1]) on adventures with version control:

  1. When I speak of commands like “svn add”, it is implied that the right filename (filenames, directory name[s], or whatever applies) is given as an argument. I should have been clearer about that in the original text. Depending on the exact command, a missing argument will either lead to an error message or an implied default argument. The latter might, in turn, lead to highly unexpected/unwanted results.
  2. During my imports, I did not consider the issue of the executable bit. In my past experiences, Subversion has not necessarily and automatically applied it correctly, forcing manual intervention. As it happens, this time around, it was correctly applied to all executable files that I had, which might or might not point to an improved behavior. However, had I thought ahead on this issue, I might* have complemented any “svn add” with an “svn propset svn:executable ON” when the file to be added was executable, and anyone writing a more “serious” script should consider the option. (Ditto with the manual addition of an executable file.)

    *In the spirit of “perfect is the enemy of good”, cf. [1], and noting that the script was never intended to be used again after the imports: Would it be more or less effort to improve the script or to just do a one-off manual correction at the end of the imports? (Or, as case might have had it, a one-off manual correction after I tried to run a particular file and was met with an error message?)

  3. Something similar might apply to other properties. Notably, non-text* files are given an svn:mime-type property in an automatic-but-fallible manner. Checking the few cases that are relevant for my newly imported files, I find three files: a correctly typed PDF file (accidentally included in the repository, cf. [1]), a correctly typed JPEG image (deliberately included), and a seemingly incorrectly typed tarball (accidentally included).**

    *Situations might exist where MIME/media types are wanted for text files too. These, too, would then need manual intervention.

    **Gratifyingly, there has been no attempt to mark a text file as non-text, an otherwise common problem. See [2] for more on this and some related topics.

    Seemingly? Looking closer at the tarball, it turns out that the tarball was broken, which limits what Subversion could reasonably have done. My checks comprised “tar -xf” (extract contents), “tar -tf” (list contents), and “file” (a command to guess at the nature of a file), all of which were virtual no-ops in this case. (Why the tarball is broken is impossible to say after these few years.)

    However, the general idea holds: Subversion is not and cannot be all-knowing, there are bound to be both files that it cannot classify and files that it classifies incorrectly, and a manual check/intervention can make great sense for more obscure formats. Note that different file formats can use the same file extension, that checking for the details of the contents of a file is only helpful with enough knowledge* of the right file formats (and, beyond a very cursory inspection, might be too costly, as commits should be swift), that new formats are continually developed, and that old formats might be forgotten in due time.

    *Not necessarily in terms of own knowledge. I have not researched how Subversion makes its checks, but I suspect that any non-trivial check relies on external libraries or a tool like the aforementioned “file”. Such external libraries and tools cannot be all-knowing either, however.

  4. An interesting issue is to what degree the use of version control, some specific version-control tool, and/or some specific tool (in general) can affect the way someone works and when/whether this is a bad thing. (Especially interesting from a laziness perspective, as discussed in [1].) The original text already contains some hints at this in an excursion, but with examples where a changed behavior through version control would have involved little or no extra effort. But consider again the moving/renaming of files and how use of Subversion might lead to fewer such actions:

    Firstly, a mere “mv” would be turned into a sequence of “svn move”, “svn status” (optional, but often recommendable), and “svn commit -m” with a suitable commit message. Depending on the details, an “svn update” might be needed before the commit.* Moreover, some manual pre-commit check might be needed to avoid a malfunction somewhere: with no repository, it does not matter when the issue is corrected; with a repository, it is better to avoid a sequence of commits like “Changed blah blah.”, “Fixed error in blah blah introduced in the previous commit.”, “Fixed faulty attempt to fix error in blah blah.” by making sure that everything is in order before the first commit. Obviously, this can lead to a greater overhead and be a deterrent.**

    *I have a sketchy record at predicting when Subversion requires this, and am sometimes caught by an error message when I attempt the commit. (Which leads to a further delay and some curses.) However, many (most? all?) cases of my recent imports seemed to involve a sequence of “svn mkdir” to create a new directory and “svn move” to move something into that directory. Going by memory from the days of yore, simultaneous changes of file position and file contents might lead to a similar issue. (Both cases, barring some tangible reason of which I am unaware, point to a less-than-ideal internal working of Subversion.) For use with multiple workspaces, say in a collaboration, a change of the same file(s) in the repository by someone else/from another workspace is a more understandable case.

    **As a counterpoint, it can also lead to a more professional approach and a net gain in a bigger picture, but that is irrelevant to the issue at hand, namely, whether use of Subversion can lead to fewer moves/renames. The same applies to the “Secondly”.

    Secondly, without version control, it does not matter in what order changes to data take place—begin to edit the text in some regard, move it, continue the ongoing edit.* The same might be possible with version control, but a “clean” approach would require that the move and the edit are kept apart; and either the edit must be completed and committed before the move, or the current state of the edit pushed aside, the move made and committed, the edit restored, and only then the edit continued. Again, this increases overhead and can be a deterrent. It can also lead to moves being postponed and, witness [1], postponing actions can be a danger in its own right.

    *Here I intend an edit and a move that are independent of each other—unlike e.g. the issue of renaming a Java class mentioned in [1], where the two belong together.

    (With older version-control systems, e.g. CVS, there is also the possibility that the move terminates the history of the old file and creates a new history for the new file, as there is no “true” move command, just “syntactic sugar” over a copy+delete. History-wise, this is still better than no version control, but the wish not to lose history might be an additional deterrent.)

  5. Subversion has some degree of configurability, including the possibility to add “hooks” that are automatically executed at various times. I have no greater experiences with these, as they are normally more an admin task than a user task, but I suspect that some of what I might (here or in [1]) refer to as manual this-or-that can be done by e.g. hooks instead.
  6. Remembering to write “an svn […]” instead of “a svn […]” is hard.
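The idea in item 2, of pairing every “svn add” with an “svn propset svn:executable ON” when the file is executable, could be sketched as a small helper along the following lines. (This is my own sketch, not the author's script; “add_with_props” is a made-up name, and the helper only prints the commands rather than running them.)

```shell
# Hypothetical helper: print (not run) the svn commands needed to add a
# file, pairing "svn add" with "svn propset svn:executable ON" whenever
# the file carries the executable bit.
add_with_props() {
  f=$1
  echo "svn add '$f'"
  if [ -x "$f" ]; then
    echo "svn propset svn:executable ON '$f'"
  fi
}
```

A more “serious” script would run the printed commands directly; printing them first allows a manual sanity check.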

Written by michaeleriksson

March 24, 2023 at 5:01 pm

Dropping the ball on version control / Importing snapshots into Subversion

with 2 comments

Unfortunately, computer stupidities are not limited to the ignorant or the stupid—they can also include those who are lazy, overly optimistic, too pressed for time, whatnot.

A particularly interesting example is my own use of version control:

I am a great believer in version control, I have worked with several* such systems in my professional life, I cannot recall the last time that I worked somewhere without version control, and I have used version control very extensively in past private activities,** including for correspondence, to keep track of config files, and, of course, for my website.

*Off the top of my head, and in likely order of first encounter, PVCS, CVS, Subversion, Git, Perforce. There was also some use of RCS during my time at Uni. (Note that the choice of tools is typically made by the employer, or some manager working for the employer, and is often based on existing licences, company tradition, legacy issues, whatnot. This explains e.g. why Perforce comes after Git in the above listing.)

**Early on, CVS; later, Subversion.

However, at some point, I grew lazy, between long hours in the office, commutes, and whatnots, and I increasingly cut out the overhead—and, mostly, this worked well, because version control is often there for when things go wrong, just like insurance. For small and independent single files, like letters, more than this indirect insurance is rarely needed. (As opposed to greater masses of files, e.g. source code to be coordinated, tagged, branched, maintained in different versions, whatnot.) Yes, using proper version control is still both better and what I recommend, but it is not a game changer when it comes to letters and the like, unlike e.g. a switch from WYSIWYG to something markup-based.

Then I took up writing fiction—and dropped the ball completely. Of course, I should have used version control for this. I knew this very well, but I had been using Perforce professionally for a few years,* had forgotten the other interfaces, and intended to go with Git over the-much-more-familiar-to-me Subversion.

*Using Perforce for my writings was out of the question. The “user experience” is poor relative e.g. Subversion and Git; ditto, in my impression, the flexibility; Perforce is extremely heavy-weight in setup; and I would likely have needed a commercial licence. Any advantages that Perforce might or might not have had in terms of e.g. “enterprise” functionality were irrelevant, and, frankly, brought nothing even in the long-running-but-smallish professional project(s) where I used it.

But I also did not want to get bogged down refreshing my memory of Git right then and there—I wanted to work on that first book. The result? I worked on the book (later, books) and postponed Git until next week, and next week, and next week, … My “version control” at this stage consisted of a cron-job* that created an automatic snapshot of the relevant files once a day.**

*Cron is a tool to automatically run certain tasks at certain times.

**Relative proper version control, this implies an extreme duplication of data, changes that are grouped semi-randomly (because they took place on the same day) instead of grouped by belonging, snapshots (as pseudo-commits) that include work-in-progress, snapshots that (necessarily) lack a commit message, snapshots that are made even on days with no changes, etc. However, it does make it possible to track the development to a reasonable degree, it allows a reasonable access to past data (should the need arise), and it proved a decent basis for a switch to version control (cf. below). (However, some defects present in the snapshots cannot be trivially repaired. For instance, going through the details of various changes between two snapshots in order to add truly helpful commit messages would imply an enormous amount of work, and I instead used much more generic, blanket messages below, mostly to identify which snapshot was the basis for the one or two commits.)
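A daily-snapshot job of this kind might have looked roughly like the following. (A reconstruction under assumptions, not the actual script; the paths, the function name, and the crontab line are all mine.)

```shell
# Reconstruction of a daily-snapshot job (paths are placeholders).
# A crontab entry along the lines of
#   0 4 * * * /home/me/bin/snapshot.sh
# would run it once a day at 04:00.
snapshot() {
  src=$1    # directory containing the writings
  dest=$2   # directory collecting the snapshots
  day=$(date +%Y-%m-%d)
  # skip if today's snapshot already exists
  [ -d "$dest/$day" ] && return 0
  cp -a "$src" "$dest/$day"
}
```

Note how this mechanically groups all of a day's changes into one snapshot, regardless of whether they belong together, which is exactly the weakness described above.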

Then came the end of 2021, I still had not set up Git, and my then notebook malfunctioned. While I make regular backups, and suffered only minimal data loss, this brought my writings to a virtual halt: with one thing and another, including a time-consuming switch from Debian to Gentoo and a winter depression, I just lost contact. (And my motivation had been low for quite some time before that.) Also see e.g. [1] and a handful of other texts from early 2022, which was not a good time for me.

In preparation to resume my work (by now in 2023…) on both my website and my books, I decided to do it properly this time. The website already used Subversion, which implied reacquainting myself with that tool, and I now chose to skip Git for the books and go with Subversion instead.*

*If in doubt, largely automatic conversion tools exist, implying that I can switch to Git if and when I am ready to do so, with comparatively little effort and comparatively little loss, even if I begin with Subversion. (And why did I not do so to begin with?) Also see excursion.

(Note: A full understanding of the below requires some acquaintance with Subversion or sufficiently similar tools, as well as some acquaintance with a few standard Unix tools.)

So, how to turn those few years of daily snapshots into a Subversion repository while preserving history? I began with some entirely manual imports, in order to get a feel for the work needed and the problems/complications that needed consideration. This by having (an initially empty) repository and working copy, copying the files from the first snapshot into the working copy, committing, throwing out the files,* copying in the files from the second snapshot,* taking a look at the changes through “svn status”, taking corresponding action, committing, etc.

*Leading to a very early observation that it is better to compare first and replace files later. Cf. parts of the below. (However, “throwing out the files” is not dangerous, as they are still present in the repository and can easily be restored.)

After a few iterations, I had enough of a feel to write a small shell script to do most of the work, proceeding by the general idea of checking (using “diff -rq” on the current working copy and the next snapshot) whether any of the already present files were gone (cf. below) and, if not, just replacing the data with the next snapshot, automatically generating “svn add” commands for any new files, and then committing.
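The core of such a script might look roughly as follows. (My own reconstruction from the description, not the actual script; “import_check” is a made-up name, and the “.svn” exclusion and sed parsing are assumptions about how one might implement it.)

```shell
# Sketch of one import step: refuse to proceed if files have disappeared
# (manual "svn move"/"svn remove" needed), otherwise generate "svn add"
# commands for files that are new in the snapshot.
import_check() {
  wc=$1    # current working copy
  snap=$2  # next snapshot
  if diff -rq -x .svn "$wc" "$snap" | grep -q "^Only in $wc"; then
    echo "files disappeared; intervene manually" >&2
    return 1
  fi
  # turn "Only in <snap>[/sub]: <file>" lines into "svn add" commands
  diff -rq -x .svn "$wc" "$snap" |
    sed -n "s|^Only in $snap\(.*\): \(.*\)|svn add '$wc\1/\2'|p"
  # (the real script would then copy the snapshot over the working
  # copy, run the generated commands, and "svn commit")
}
```

The early exit in the “files disappeared” case corresponds to the manual interventions described below.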

The above “if not” applied most of the time and made for very fast work. However, every now and then, some files were gone, and I then chose to manually intervene and find a suitable combination of “svn remove” and, with an eye at preserving as much as possible of the historical developments, “svn move”.* (Had I been content with losing the historical developments, I could have let the script generate “svn remove” commands automatically too, turning any moves into independent actions of remove-old and add-new, and been done much faster.) After this + a commit, I would re-run the script, the “if not” would now apply and the correct remaining actions would be taken.**

*See excursion.

**If a file had been both moved and edited on the same day/in the same snapshot, there might now be some slight falsification of history, e.g. should I first have changed the contents and then moved the file. With the above procedure, Subversion would first see the move and then the change in contents. Likewise, a change in contents, a move, and a further change in contents would be mapped as a move followed by a single change in contents. However, both the final contents of the day and the final file name of the day are correctly represented in Subversion, which is the main thing.

All in all, this was surprisingly painless,* but it still required a handful of hours of work—and the result is and remains inferior to using version control from the beginning.

*I had feared a much longer process, to the point that I originally had contemplated importing just the latest state into Subversion, even at the cost of losing all history. (This was also, a priori, a potential outcome of those manual imports “to get a feel for the work needed”. Had that work been too bothersome, I would not have proceeded with the hundreds of snapshots.)

(There was the occasional string of annoyances, however, as I could go through ten or twenty days’ worth of just calling the script resp. of the “if not” case, run into a day requiring manual intervention, intervene, and proceed in the hope of another ten or twenty easy days—but instead run into several snapshots requiring manual intervention in a row. As a single snapshot requiring manual intervention takes longer than a month’s worth of snapshots that do not, this was a PITA.)

Excursion on disappearing files:
There were basically three reasons for why an old file disappeared between snapshots:

  1. I had (during the original work) moved it to another name and/or another directory. I now had to find the new name/location and do an “svn move” to reflect this in the repository. (And sometimes an “svn mkdir”, when the other directory did not already exist. If I were to begin again, I would make the “svn mkdir” automatic.) Usually, this was easy, as the name was normally only marginally changed, e.g. to go from “tournament” to “23_tournament”, corresponding to a chapter being assigned a position within the book; however, there were some exceptions. A particular hindrance in the first few iterations was that I failed to consider the behavior of the command-line tool “diff” (not to be confused with “svn diff”), which I used to find differences between the state in the repository and the next snapshot: a call like “diff -rq” upon two directories does show what files are present in the one but not the other, but if a (sub-)directory is missing, the files in that directory are not listed in addition to the directory itself. (With the implication that I first had to “svn mkdir” the new directory, and only afterwards would “diff -rq” show me the full differences in files.) This complication might have made me misinterpret a few early disappearing files as belonging to one of the following items, instead of this item, because I could not see that the file had been moved. Another complication was when a file had been given a new name with a less obvious connection, which happened on some rare occasions.
  2. I had outright deleted it, be it because the writing was crap, because the contents did not fit well with the rest of the story, or because it served some temporary purpose, e.g. as a reminder of some idea that I might or might not take up later. In a particularly weird case, I had managed to save a file with discus statistics with my writings, where it absolutely did not belong. (I am not certain how that happened.) These cases resulted in a simple “svn remove”.
  3. I had integrated the contents into another file and then deleted the original file, often with several smaller files being integrated into the same larger file through the course of one day. Here, I used a “svn remove” as a compromise. Ideally, I should have identified the earlier and later files, committed them together, and given them an informative commit message, but the benefits of this would have been in no proportion to the additional effort. (This is a particularly good example of how proper version control, with commits of changes as they happen, is superior to mere daily snapshots.)

In a more generic setting, I might also have had to consider the reverse of the last item, that parts or the whole of a larger file had been moved to smaller new files, but I knew that this had been so rare in my case, if it had happened at all, that I could ignore the possibility with no great loss. A similar case is the transfer of some parts of one file into another. This has happened from time to time, even with my books, e.g. when a scene has been moved from one chapter to another or when a part of a file with miscellanea has found a permanent home. However, it is still somewhat rare and the loss of (meta-)information is lesser than if e.g. an atomic “svn move” had been replaced with a disconnected “svn remove”–“svn add” sequence. (Other cases yet might exist, e.g. that a single file was partially moved to a new file, partially integrated into an old one. However, again, these cases were rare relative the three main items, and relatively little could be gained from pursuing the details.)
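The “diff -rq” behavior described in the first item is easy to demonstrate (the directory and file names here are arbitrary):

```shell
# "diff -rq" reports a directory present on only one side as a single
# line; the files inside that directory are not listed individually.
tmp=$(mktemp -d) && cd "$tmp"
mkdir -p old new/23_chapter
echo "text" > new/23_chapter/scene
diff -rq old new || true   # diff exits non-zero when differences exist
# prints just: Only in new: 23_chapter
# the file "scene" inside the new directory is never mentioned
```

Only after the directory exists on both sides does “diff -rq” descend into it and list the individual files.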

Excursion on some other observations:
During my imports, I sometimes had the impression that I had structured my files and directories in an unfortunate manner for use with version control, which could point to an additional benefit of using version control from day one. A particular issue is that I often use a directory “delete” to contain quasi-deleted files over just deleting them, and only empty this directory when I am sure that I do not need the files anymore (slightly similar to the Windows Recycle Bin, but on a one-per-directory basis and used in a more discretionary manner). Through the automatisms involved above, I had such directories present in the snapshot, added to Subversion during imports, files moved to them, files removed from them, etc. Is this sensible from a Subversion point of view, however? Chances are that I would either not have added these directories to the repository in the first place, had I used Subversion from the beginning, or that I would not have bothered with them at all, within or without the repository, as the contents of any file removed by “svn remove” are still present in the repository and restorable at will. Similarly, with an eye at the previous excursion, there were cases of where I kept miscellanea or some such in one file, where it might have been more Subversion-friendly to use a separate directory and to put each item into its own file within that directory.

As a result of the above procedure, I currently have some files in the repository that do not belong there, because they are of a too temporary nature, notably PDFs generated based on the markup files. Had I gone with version control to begin with, they would not be present. As is, I will remove them at a later time, but even after removal they will unnecessarily bloat the repository, as the data is still saved in the history. (There might be some means of deleting the history too, but I have not investigated this.) Fortunately, the problem is limited, as I appear to have given such temporary files a separate directory outside of the snapshot area at a comparatively early stage.

When making the snapshots, I had taken no provisions to filter out “.swp” files, created by my editor, Vim, to prevent parallel editing in two Vims and to keep track of changes not yet “officially” written to disk. These had to be manually deleted before import. (Fortunately, possible with a single “find -iname ’*.swp’ -delete” over all the snapshots.) There might, my memory is vague, also have been some very early occurrence when I accidentally did add some “.swp” files to the repository and had to delete them again. Working with Subversion from day one, this problem would not have occurred.
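For future use, such files can also be excluded at the Subversion level: the “global-ignores” option in the “[miscellany]” section of “~/.subversion/config” makes the client overlook matching unversioned files. A minimal fragment might look like the following (note that, to my understanding, setting the option replaces the built-in default ignore list, so in practice the defaults should be repeated alongside the addition):

```
[miscellany]
global-ignores = *.swp
```

This would not have helped with the snapshots themselves, but it prevents accidental “svn add”s of such files during normal work.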

I had a very odd issue with “svn mkdir”: Again and again, I used “svn add” instead, correctly received an error message, corrected myself with “svn mkdir”—and then made the exact same mistake the next time around.* The last few times, I came just short of swearing out loud. The issue is the odder, as the regular/non-svn command to create a directory is “mkdir”, which should make “svn mkdir” the obviously correct choice over “svn add”.

*If a directory already exists in the file system, it can be added with “svn add”, but “svn add” cannot create new directories. If in doubt, how would Subversion know whether the argument given was intended as a new directory or as a new file?

Excursion on Git vs. Subversion:
Git is superior to Subversion in a great many ways and should likely be the first choice for most, with Subversion having as its main relative strength a lower threshold of knowledge for effective and efficient use.* However, Git’s single largest relative advantage is that it is distributed. Being distributed is great for various collaborative efforts, especially when the collaborators do not necessarily have constant access to a central repository, but is a mere nice-to-have in my situation. Chances are that my own main benefit from using Git for my books would have been a greater familiarity with Git, which would potentially have made me more productive in some later professional setting. (But that hinges on Git actually being used in those settings, and not e.g. Perforce. Cf. an above footnote.)

*But this could (wholly or partially) be a side-effect of different feature sets, as more functionality, all other factors equal, implies more to learn. (Unfortunately, my last non-trivial Git use is too far back for me to make a more explicit comparison.)

Excursion on automatic detection of what happened to deleted files:
I contemplated writing some code to attempt an automatic detection of moved files, e.g. by comparing file names or file contents. At an early stage, this did not seem worth the effort; at a later stage, it was a bit too late. Moreover, there are some tricky issues to consider, including that I sometimes legitimately have files with the same name in different directories (e.g. a separate preface for each of the books), and that files could not just have been renamed but also had their contents changed on the same day (also cf. above), which would have made a match based on file contents non-trivial.* Then there is the issue of multiple files being merged into a new file… My best bet might have been to implement a “gets 80 percent right based on filenames” solution and to take the losses on the remaining 20 percent.

*One fast-to-implement solution could be to use a tool like “diff” on versions of the files that have been reformatted to have one word per line, and see what proportion of the lines/words come out the same and/or whether larger blocks of lines/words come out the same. This is likely to be quite slow over a non-trivial number of files and is likely to be highly imperfect in results, however. (The problem with more sophisticated solutions, be they my own or found somewhere on the Internet, is that the time invested might be larger or considerably larger than the time saved.)
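The footnoted idea can be sketched in a few lines of shell. (My own sketch; “word_diff_count” is a made-up name, and a real detector would still need a threshold relative to file size and handling for merged files.)

```shell
# Count differing words between two files by running "diff" on
# one-word-per-line reformattings of them; a low count relative to the
# files' word counts suggests a moved/renamed file.
word_diff_count() {
  a=$(mktemp); b=$(mktemp)
  tr -s '[:space:]' '\n' < "$1" > "$a"
  tr -s '[:space:]' '\n' < "$2" > "$b"
  diff "$a" "$b" | grep -c '^[<>]'
  rm -f "$a" "$b"
}
```

Comparing every disappeared file against every new file in this manner is quadratic in the number of candidates, which is part of why such a solution is likely to be slow.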

Excursion on general laziness:
More generally, I seem to have grown more lazy with computer tools over the years. (As with version control, I will try to do better.) For instance, the point where I solve something through a complex regular expression instead of manual editing has shifted to require a greater average mass of text than twenty years ago. Back then, I might have erred on doing regular expressions even for tasks so small that I actually lost time relative manual editing, because I enjoyed the challenge; today, I rarely care about the challenge, might require some self-discipline to go the regexp route, and sometimes find myself doing manual editing even when I know that the regexp would have saved me a little time. (When more than “a little time” is at stake, that is a different story and I am as likely to go the regexp route as in the past.)

Excursion on “perfect is the enemy of good”:
This old saying is repeatedly relevant above, most notably in the original decision to go with Git (a metaphorical “perfect”) over Subversion (a metaphorical “good”), which indirectly led to no version control being used at all… I would have been much better off going with Subversion over going with daily snapshots. Ditto, going with Git over snapshots, even without a refresher, as the basic-most commands are fairly obvious (and partly coinciding with Subversion’s), and as I could have filled in my deficits over the first few days or weeks of work. (What if I screwed up? Well, even if I somehow, in some obscure manner, managed to lose, say, the first week’s worth of repository completely, I would still be no worse off than if I had had no repository to begin with, provided that the working copy was preserved.) However, and in reverse, I repeatedly chose “good” over “perfect” during the later import, in that I made compromises here and there (as is clear from several statements).

Excursion on books vs. code:
Note that books are easier to import in this manner than code. For instance, with code, we have concerns like whether any given state of the repository actually compiles. While this can fail even with normal work, the risk is considerably increased through importing snapshots in this manner, e.g. because snapshots (cf. above) can contain work-in-progress that would not have been committed. With languages like Java, renaming a class requires both a change of the file contents and the file name, as well as changes to all other files that reference the class, and all of this should ideally be committed together. Etc. Correspondingly, much greater compromises, or much greater corrective efforts, would be needed for code.

Excursion on number of files:
A reason for why this import was comparatively slow is the use of many files. (Currently, I seem to have 317 files in my working copy, not counting directories and various automatically generated Subversion files.) It would be possible to get by with a lot fewer, e.g. a single file per book, a TODO file, and some few various-and-sundry. However, while this would have removed the issue of moved files almost entirely, it would have been a very bad idea with an eye at the actual daily work. Imagine e.g. the extra effort needed to find the right passage for editing or the extra effort for repeatedly jumping back and forth between different chapters. Then there is the issue of later use of the repository, e.g. to revisit the history, to find where an error might have been introduced, whatnot—much easier with many smaller files than several large ones.

(As to what files I have, in a very rough guesstimate: about a quarter are chapters in one of the books, about two dozen are files like shell scripts and continually re-used LaTeX snippets, some few contain TODOs or similar, some few others have a different and varying character, and the remaining clear majority are various pieces of text in progress. The last include e.g. chapters-to-be, individual scenes/passages/whatnot that might or might not be included somewhere at some point, and mere ideas that might or might not be developed into something larger later on.)

Written by michaeleriksson

March 23, 2023 at 8:59 am

Missing pingbacks / W-rdpr-ss drops the ball again

with one comment

As a more administrative notice:

For some reason, I have seen no notifications of pingbacks between my own posts in the last two weeks. Logging in today to check, I see that the missing pingbacks are not in my spam folder, nor are they in their usual place in the moderation folder (with only the email notification missing), nor have they been automatically approved—they simply are missing.

I do not know why—and it is a bloody shame, as automatic pingback handling was one of the few things that W-rdpr-ss almost did well and one of the few things that made W-rdpr-ss somewhat worthwhile. (Almost? Well, why the hell should I need to moderate a pingback sent from one of my own posts to another of my own posts? Idiotic!)

I have just worked through various settings and whatnots in my admin area. A few settings (if added recently or having had their semantics recently changed) might be problematic, and I have experimentally changed them, including one relating to the maximum number of links in a comment.* However, I simply do not have the time to engage in further trouble-shooting and experiments.

*This should not have any reasonable effect, as any individual pingback only amounts to one link, even should the post from which it stems have many links. However, my experiences with W-rdpr-ss show that it is run by idiots and I first noticed the issue after having posted a text with an unusual number of links in it.

Written by michaeleriksson

March 1, 2023 at 9:38 pm

The deserving “deserve” / Follow-up: Some unfortunate words and uses

leave a comment »

In an older text ([1]), I spoke strongly against most uses of “deserve” (here and elsewhere taken to include variations, e.g. “deserving”). Since then, I have noticed that I use this word comparatively often myself, e.g. earlier today ([2]). In some cases, it could be that I should follow my own recommendations and use e.g. “has earned” over “deserves”; however, there is more to it. Consider a mention in [2] (emphasis added):

For instance, if some variation of the second scheme was implemented, a teacher who just happens to have several genuine A-students (as opposed to “got an A, because everyone gets an A”-students) could effectively be punished for giving the A-students the grade that they truly deserve.

How should this best be expressed without “deserve”? A “have earned” is tempting, and many, especially those naive about school, might resort to this; however, what grade is, in some sense, deserved moves on a different level—and one of the problems with school is a naive failure to see the difference. (Note e.g. the writings of educationrealist, who has repeatedly dealt with such failure and its consequences, e.g. in [3] and [4].)

For instance, if we look at a naive school (school system, teacher, whatnot), someone might “earn” a certain grade by putting in the busy work, doing the right assignments, etc.—but if the intended knowledge and understanding do not manifest, how can we say that the student deserves a good grade, even should he, on paper, have earned it?* Vice versa, if a student has not done various busy work, maybe because he already has mastered the matter-to-be-mastered and knows it, he might still deserve a good grade based on his knowledge and understanding—and he would arguably do so without having earned it. (Certainly, some teachers would argue that he has not earned it, and, worse, follow with the poor conclusion that “because he has not done the busy work, he is also undeserving”.)

*Here we might also have complications like the possibility that someone would have gained a higher degree of proficiency, had it not been for boring busywork or a teacher who was detrimental to own thinking in the students. If such a student fulfills the “legal” requirements, denying the “earned” would be very unfair, and even a “deserve” could conceivably be argued, if a “deserved” of a different type than the one discussed above.

Other variations that might arise from [1] include “right”—but by what standard does someone have the right to a certain grade? An a priori right certainly does not exist. (Although some out-of-touch-with-reality hyper-egalitarians might want to claim exactly this, maybe because meaningful grades would be “social injustice” or “White supremacy” and, therefore, everyone must get an A.) Even after completion of the course/class/whatnot, a right must be contingent on some accomplishment or demonstration, e.g. the passing of a test, and here the problems begin: For instance, the wrong type of teacher might argue that the right arises automatically and solely from doing the right assignments, and we are back at square one.* For instance, a deserving student might not be offered a sufficiently good chance at earning that right, e.g. because of a poor system with no sufficient evaluation. For instance, a deserving student might miss the offer through being ill at the wrong time. In the latter two cases, he would still be deserving, but he would not have earned the right, and a distinction between the two is clearly needed.**

*The issue might be partially resolved through making a differentiation between a “true” right and a merely perceived right, but, in a next step, we land at issues like whose perception is mere perception and whose perception reflects that “true” right. Alternatively, we might argue a difference between a “true” right and a “legalistic” right, but this has similar issues and the difference between such a “true” right and a “deserves” might not be worth the bother.

**In all fairness, even when we speak in terms of deserving, a differentiation between being deserving and having proven oneself deserving will often be necessary.

More generally, there are very many cases of “deserve”, especially in a negative direction, that I am on board with, e.g. that someone like Fauci or Birx might deserve to be prosecuted, with a severe jail sentence as a possible result, but where other formulations are problematic. For instance, with Fauci and Birx, it is not a given that there are sufficient legal grounds to prosecute them, which makes “earned” and “right”* problematic. (And, off topic, I suspect that any prosecution would end with a big nothing.)

*Another problem is, of course, that rights relate to positives; however, this is more a matter of words than of principle. We might simply replace “right” with some variation of “obligation” in appropriate contexts and/or view the right involved here as something belonging to the victims.

In many ways, this type of “deserve” does involve some abstract, and often subjective, moral angle, and claims like “I deserve” and “he deserves” do have a niche to fill. (From a linguistic point of view. It does not automatically follow that any given claim is factually/ethically/whatnot justified—and my original complaint is rooted in the many cases of wishful thinking, propaganda, etc.) Nevertheless, the points of [1] largely hold, especially in that better words should be used when they fit and that we must differ between “I deserve” and e.g. “I want”.

Written by michaeleriksson

February 11, 2023 at 4:14 pm

Posted in Uncategorized


Follow-up: Some thoughts on generalization

leave a comment »

Regarding my previous text, I belatedly recall one issue, intended for an excursion, that was lost during those weeks of delay: a differentiation based on motivation. (Especially, but not necessarily exclusively, in art.) This is addressed to some degree, but not sufficiently so.

For instance, in a footnote, I use an example of a “young woman in a still-covers-a-lot bikini” as a potential shocker in a movie released at one time, who might be replaced with e.g. a “young woman in a modern bikini” at a later time, and, even later, with a “young woman having sex on the beach”. If this is done out of a failure to generalize, it is fairly pointless; however, it is conceivable that someone wanted to show that “young woman having sex on the beach” on all three occasions and that the sentiments of the time, movie censorships, or similar, only ever allowed it on the third occasion. (Similar to how someone like George Lucas might have wanted to make a certain type of CGI or SFX movie in year X, but had to wait until year Y for technology to catch up with vision.)

We might still dispute whether a scene with sex on the beach is preferable to a scene with a bikini,* but the motivation is different and the level of naivety of the film-maker is lower.

*Non-porn attempts to show sex on screen are usually more turn-offs than turn-ons to me, be it because they are hard to do well, because they simply are done poorly, because they waste time, or for some other reason. This while nubile young women running around in bikinis tend to have a much more positive effect.

Similarly, if someone had an attitude of “I dislike movie censorship and I will push the borders of the allowed today, in the hope that more will be allowed tomorrow”, this would be partially legitimate. We might, again, dispute whether this is a good attitude and a sound priority,* but it is not a result of a failure to generalize.

*I am strongly in favor of freedom of speech (etc.), but it is often the case that more restrictive and/or “wholesome” movies are more enjoyable. (If in doubt, having fewer options might force a better and more thought-through approach. For instance, the shower scene in “Psycho”, while likely shocking by the standards of the day, actually shows very little of anything, lets the music and the viewer’s imagination do much of the work, and is still much more effective than what might be seen in a modern slasher movie.) As is often the case, the ideal situation would be that certain scenes are allowed but that they are only used when the situation truly calls for it—there is a reason why phrases like “gratuitous sex” are so commonly used about fiction.

However, where an attitude of “more skin”, “more erotica”, “sex sells”, or similar is understandable in some movie contexts, most other border-pushing seems to be more in line with my previous text, especially when the border-pushing takes place for reasons like shock value or with some naive agenda of jolting-the-bourgeois-audience-out-of-its-complacency. (The latter borders on being childish, is unlikely to work, and, to the limited degree that it possibly might work, presumes that the audience is as poor at generalization as the presumptive jolter. Cf. my previous remarks on fictional murder-as-art.)

Consider “The Cook, the Thief, His Wife & Her Lover”: It is in some ways an excellent movie, I have seen it twice,* and I am open to seeing it a third time at some point in the future. However, it contains a number of scenes that detract from the movie by just being disgusting, while bringing no particular value relative to a more restrained version of the same scene. (The often nightmarish or surreal atmosphere, for instance, is not dependent on the scenes that I found detrimental.) Now, I do not know what Greenaway tried to achieve and what his motivations were, but reducing the disgust level might have made for a better movie and if he did try to e.g. shock his audience, he might well have been naive. (Modern wanna-be shockers might take note that this movie is already more than thirty years old and that the same applies to e.g. Peter Jackson’s grotesque “Braindead”. Chances are that you will just embarrass yourselves.)

*Once in the 1990s; once, maybe, two years ago.

As an aside, there was a point where pushing the border further would have been beneficial, but where the easy way out was taken: At the end of the movie, the villain is forced, at gun point, to eat the flesh of a man that he had murdered (or ordered murdered?). He takes a dainty bite and is then shot in an act of revenge. Here, it might have made sense to prolong the eating until such a point that he could not bring himself to go on, or e.g. threw up, and only then to shoot him. As is, the scene is almost anti-climactic, especially in light of the foregoing scenes. (Maybe Greenaway had some thought behind this, but, if so, I can only guess at what it might have been. I doubt that cannibalism was a taboo that much bigger, relative to the other scenes, even back then.)

Written by michaeleriksson

February 3, 2023 at 4:02 am

Some thoughts on generalization

with one comment

The area of generalization, extrapolation, abstraction, analogies, etc., can be quite interesting—as can the question of how to best handle it. (For simplicity, I will mostly speak in terms of just “generalization”, and the examples might be tilted towards specifically generalization, but the word should be taken in a widened meaning of “generalization, extrapolation, […]”.)

For instance, with most* texts that I write, I find that the contents apply (in whole or in part; with or without relevant modifications; with or without abstraction; whatnot) to other areas. When should I mention this in the respective text resp. when would the mention bring value to the reader? This is a judgment call, which usually turns out as “don’t bother” with me. Factors to ideally consider include how obvious the generalization is (mentioning the too obvious can be a waste of the reader’s time and/or seem insulting), how far-going a generalization might give a benefit, how many possible generalizations there are (the more there are, the more work there is likely to be), the issue of how much text is needed to make the generalization worthwhile,** etc. Then there is the observation that spending more time thinking about a topic might bring forth a slew of new generalizations, which could lead to never-ending work.

*And chances are that the exceptions arise from my not having spotted the generalization(s) yet—not from an absence of possible generalizations.

**I might just mention that “X generalizes to Y”, but this need not be that interesting without a deeper discussion of the details and consequences. (This, especially, in areas like math. An advantage of usually writing about politics and similar topics, not math, is that the additional discussion needed can be much smaller.)

In most cases, however, I do not so much engage in a deep analysis as rely on the mood of the moment—if I at all remember that there was a generalization that could be mentioned. A recurring factor in my decisions is that I often am a little tired and/or tired-of-the-topic when I have a text done, and adding another paragraph to discuss generalizations is not very enticing at this stage. Here we also see an example of how trivial or obvious the generalizations can be. For instance, the same idea applies to virtually any tiring activity, or situation of being tired, that has an optional continuation—but that goes without saying for most readers. And where should the generalization be stopped? Replace “tiring” with “boring” and something similar applies. Ditto “painful”. Ditto even normally positive things, once we enter “too much of a good thing” territory. Abstracting and generalizing to find some single formulation might bring about the tautological “when I am disinclined to continue, I am disinclined to continue”, which truly is too trivial to bother with, might be too detached from the original situation (as being “tired” does not automatically imply “disinclined to continue”), and might still not be the end of the scale. (For instance, a similar idea might apply to a great many other contexts; for instance, a “when X, then X” is a further generalization, just of a different type.)

Similarly, that I might “rely on the mood of the moment” over a deep analysis is not unique to this situation. It can also affect e.g. what I buy in a grocery store, and the generalization starts again. But now we have two different ideas that both generalize, which allows us a generalization about generalizations…

An interesting complication, in a generalization (!) of an older text ([1]), is that adding a generalization to some ideas could conceivably raise an expectation in the reader that I add generalizations whenever I am aware of one. If there is a too obvious generalization of another idea, or a second of the first idea, that I do not mention, then I might look foolish in front of this reader. Of course, the fact that I occasionally have such concerns, while the typical reader is unlikely to even care or notice, generalizes another portion of [1]. Potential further generalizations of this generalization include that “many pay too much attention to the opinions of others”, “many overestimate how much others might care”, and “many fear non-existent threats”, with further generalizations of these possible. Then we have the conflict between my intellectually knowing that few readers will care/notice and my instinctually imagining that they will, usually followed by a quick suppression of my instinct by my brain—which, you guessed it, generalizes. (I will not mention further cases in the continuation, but they are plentiful.)

Of course, the amount and direction of generalization that is appropriate in a given context need not be the same in another context. For instance, if someone working on a specific physics problem makes a novel mathematical observation, this observation is likely to have an analogue in other problems and other areas, where the equivalent math appears, but this might simply not be of immediate interest. For someone working on such another problem, the situation might be different, but it is not a given that more than a one-off generalization to that single other problem is wanted. However, once a mathematician with the right interests gets his hands on the original observation, it might be generalized one or two steps fairly rapidly—and another one or two steps when some mathematicians with other, especially more abstract, interests get involved. Etc.

It might even be argued that the ability to find the right level of generalization for the task at hand is more important than the ability to find generalizations. (And this level might in many contexts be “no generalization”.)

However, generalization is often something positive, for instance as a means to avoid reinventing the wheel, which can all too easily happen when workers in different fields encounter similar problems. Consider e.g. how often different physical phenomena are, at least to a decent approximation, governed by the same differential equations and how wasteful it would be to develop the same methods of solution in the case of each individual phenomenon—possibly, including the repeated development of the idea of differential equations… Mathematicians are particularly keen on such generalization, e.g. by showing that a certain set and associated operators match a known “algebraic structure”, after which they know that all results of that algebraic structure apply equally to the new case.
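As a concrete illustration of such shared differential equations (the standard textbook example, not drawn from the surrounding text): a mass on a spring and an LC circuit obey the same equation, so one method of solution serves both.

```latex
% Mass m on a spring with stiffness k (Newton's second law):
m \ddot{x} + k x = 0
% Charge q on the capacitor of an LC circuit (Kirchhoff's voltage law):
L \ddot{q} + \frac{1}{C}\, q = 0
% Both are instances of the simple harmonic oscillator
\ddot{u} + \omega^2 u = 0,
\qquad \omega^2 = \frac{k}{m} \;\text{resp.}\; \frac{1}{LC},
% with the same general solution
u(t) = A \cos(\omega t) + B \sin(\omega t).
```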

In other cases, a failure to abstract can be outright wasteful or harmful in other ways. Consider various arts, including painting and the theatre, where there has been a long history of new artists trying to outdo the previous generation in e.g. the breaking of norms, the “shock value”/provocation, and where to draw the border between art and non-art.* But why? If someone manages to find/create something truly thought-worthy, truly original, truly unforeseen, truly value-bringing as an extrapolation, whatnot—by all means. This has rarely been the case, however: most of what has been considered provocative has been well within what even the layman has been able to imagine on his own, has been a natural extrapolation of previous provocation,** has long been exceeded in less “artsy” contexts,*** or similar. I have, e.g., encountered fictional depictions of artists that have gone as far as to consider murder an art, incorporate murder in performance art, murder for artistic provocation, use body parts of a murder victim as art, and similar.**** What could a real-life artist do that would move me beyond the borders of what I have already seen in fiction or could myself conceive? Splash a bucket of pig’s blood on an empty canvas and call it art? Please! Why not just draw the natural conclusion that this type of provocation, escalation of provocation, whatnot is pointless and will often do more harm than good to the art at hand?*****

*As opposed to e.g. just experimenting further in some direction for more artistic purposes, say, to find out what the effect on a canvas is when a certain school of painting is pushed further and whether the result is worthwhile.

**In principle, if not in detail. For instance, if we start in a very straight-laced era of movie-making and have a “shocker” of showing a young woman in a still-covers-a-lot bikini, the next escalation of shock might consist of showing a young woman in a modern bikini, showing a topless young woman from behind, or similar. To predict the exact escalation is hard; to predict the general nature of the escalation (and the risk of an escalation) is a different matter. Even an escalation to, say, a completely naked young woman having sex on the beach would be more a matter of quantity than quality, of taking several steps of escalation at once. Going in another direction, imagining a young man or an old woman in a bikini does not take a revelation either—but why would anyone wish to see them?

***Contrast e.g. a sexually explicit art movie with a porn movie.

****Note e.g. portions of “Dexter”, but the idea is somewhat common.

*****Unless the artist follows some disputable non-artistic agenda, e.g. to change societal norms, to ruin this or that art form for the “wrong” persons, or, even, to ruin art. While I do not think that such an attitude is common, it is certainly possible, compatible with the behavior of many political activists in other areas, and compatible with some other excesses. Consider e.g. how some seem to take the laudable attitude of “function should take priority over aesthetics” and amend it with a despicable, unreasonable, and irrational “ergo, we should deliberately strive to make buildings ugly”.

More generally, it is often the case that certain ultimate extrapolations and generalizations follow immediately to the reasonably intelligent, but that mid- or nitwits, who are themselves poor at generalization, try to take each individual step at a time. A good (fictional) example is found in Lewis Carroll’s “What the Tortoise Said to Achilles”, where just one or two iterations should have been enough to prove the point, but Achilles was too dense to understand this—and, maybe, the tortoise too dense to understand that he had just caught himself in a trap, where his best bet would be to wait for Achilles to fall asleep and then make a crawl for it. An interesting parallel to this is the idea of plus-one, infinity, and infinity-plus-one:

Consider two kids arguing over the size of something, say, who has the greatest desire for a last piece of cake. A stereotypical dialogue might then include something along the lines of “I want it more than you!”, “Hah! I want it twice as much as you!”, “Hah! I want it thrice as much as you!”, etc., until someone drops the bomb of “I want it infinity times as much as you!”. Exactly how to consider the introduction of infinity is a tricky question, but something like “extrapolation of what events are likely, followed by an attempt to defeat the extrapolation” might be close to the mark, which could be seen as a case of successful generalization (in the extremely wide sense used in this text). Moreover, infinity as such could* be seen as an extremely interesting extrapolation of large numbers. However, we also see a failed generalization, as both kids, unless first-time participants, should have realized that infinity was coming and that whoever first dropped the bomb would “win”.

*The conditionality hinges on whether it is an extrapolation, not on whether it is interesting.

In a next step, we could have the other kid either conceding or trying some variation of “infinity plus one”—and their argument might then have turned to whether this “infinity plus one” was or was not larger than infinity. Here we land at a very interesting question, as mathematicians consider infinity-plus-one in the sense of addition equal to infinity, meaning that attempting to trump infinity with infinity-plus-one is as pointless as trying to trump three with three (resp., above, thrice with thrice). In this sense, the argument could finally degenerate into whether an “infinity vs. infinity” standoff should be considered a draw or a victory for the first invoker of infinity. But, while mathematicians consider infinity + 1, infinity + 2, and even infinity x 25 equal to infinity, they also have a generalized version of the “successor operator” implied by plus-one.* Here we have a generalization arguably bringing something more interesting than even infinity—the idea of a number larger than infinity.** In the unlikely event that infinity-plus-one was intended as this successor operator, not as a mere addition of one, the other kid would have kept the game alive.

*As in 1 + 1 = succ(1), 2 + 1 = succ(2) = succ(succ(1)), etc., for a suitable operator succ, which for integers is simply the same thing as adding one—but where succ(infinity) is something different. By such generalization of the successor operator, we then have a hierarchy of non-equivalent infinities. The “vanilla” infinity presumably intended by the triumphant kid would then carry the more specific name aleph0. (Assuming the most common approach of using “cardinal numbers” and with some oversimplification, as we potentially begin to compare apples and oranges.)

**But this idea might have originated from another line of reasoning, e.g. Cantor’s famous “diagonal proof” that there are more real than rational numbers, and its generalizations.

However, from here on, it is trivial to infer the possibility* of applying a successor operator to infinity infinitely often to form a super-infinity, a version of the successor operator that finds an even larger successor to the super-infinity, a super-duper-infinity and a successor to the super-duper-infinity, etc. Even trying to break out of this by e.g. constructing a super-duper-whatnot-infinity where the “whatnot” incorporates an infinity of terms stands the risk* of failing due to an even more generalized successor operator.

*As case has it, all these successors, super-duper-infinities, and successor operators exist, but their existence is not a given from the above. Without further investigations, we cannot infer more than the possibility. (This with some reservations for what qualifies as “existing” and what has what semantics.)
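To make the above a little more concrete, a few standard facts of cardinal arithmetic (stated without proof; the hierarchy of ever larger infinities is usually reached through the successor cardinal resp. the power set):

```latex
% Addition and finite multiplication do not lead out of \aleph_0:
\aleph_0 + 1 = \aleph_0, \qquad
\aleph_0 + \aleph_0 = \aleph_0, \qquad
25 \cdot \aleph_0 = \aleph_0
% The successor cardinal and the power set give strictly larger infinities:
\aleph_0 < \aleph_1, \qquad
\aleph_0 < 2^{\aleph_0} < 2^{2^{\aleph_0}} < \dotsb
```

(Whether $\aleph_1 = 2^{\aleph_0}$ is the famous continuum hypothesis, which is independent of the usual axioms of set theory.)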

Apart from some minor editing, the above was written some weeks ago. At the point where the text ends, I was distracted by a text from my backlog, with more mathematical content, which fit well in context and would have clarified a few points above. Having written most of it, I found some issues that I wanted to mull over before finalization and failed to get around to it, which has led to these “some weeks” of delay. To avoid further delays of the current text, I have decided to put the other text back in the backlog. (Especially, as I could benefit from improving my markup language with regard to math before proceeding.) It is possible that some additional thoughts or sub-topics that I intended to include in the current text have been forgotten during this delay. Certainly, trying to go easy on the mathematically unknowledgeable, I run the risk of being sufficiently approximative with the truth as to annoy the mathematically knowledgeable, while not giving enough details for the unknowledgeable to be truly helped.

To, however, give two core ideas of the other text: (a) When we generalize a certain type of number, a certain algebraic structure, whatnot, there is rarely or never one single generalization, and statements made under the assumption of a single generalization can be faulty or simplistic. (E.g. the claim that the square-root of -1 is i and/or -i, which truly amounts to something like “the field of complex numbers has the field of reals as a subfield and the number i from the field of complex numbers has the property that i^2 = -1 and (-i)^2 = -1”, which does not automatically preclude that some other field or other algebraic structure than the complex numbers has similar properties and provides another set of “roots”.) (b) The discussion of whether e.g. complex numbers and various infinities exist is in so far pointless as we can just abstractly define some set of elements and operations on these elements, use them when and where they happen to be useful, and forget questions like whether e.g. i is something real (non-mathematical sense) and/or something that “belongs” with the real (mathematical sense) numbers. For instance, the field of complex numbers can be quite useful in dealing with, say, calculations on electricity and magnetism, regardless of what nature we consider i to have—and there are fields equivalent to the complex numbers that do not even mention i.
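To sketch point (b): one such field equivalent to the complex numbers is the standard construction as ordered pairs of reals with suitable operations, in which no symbol “i” ever appears. A minimal Python sketch (the function names are my own, chosen for illustration):

```python
# Complex numbers as ordered pairs of reals. The field operations are
# defined directly on the pairs; "i" is never mentioned.

def add(p, q):
    # (a, b) + (c, d) = (a + c, b + d)
    (a, b), (c, d) = p, q
    return (a + c, b + d)

def mul(p, q):
    # (a, b) * (c, d) = (ac - bd, ad + bc)
    (a, b), (c, d) = p, q
    return (a * c - b * d, a * d + b * c)
```

Here the pair (0, 1) behaves exactly as i is supposed to, and (0, -1) as -i: both square to (-1, 0), the pair playing the role of -1.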

Written by michaeleriksson

February 2, 2023 at 9:14 pm

Posted in Uncategorized


Further reading tips: Facing Reality

leave a comment »

Since beginning my series of further reading tips a few months ago, I have not managed to add one single entry. (Also see excursion.) Time for a change:

Yesterday, I discovered a 2021 book by Charles Murray, “Facing Reality”, which had flown under my radar and which I highly recommend to those naive* on topics like “systemic racism” and U.S. demographics, or, more generally, naive on how much of the various “narratives” is out of touch with reality, with the actual facts and statistics at hand, with what actual science says, etc.

*My recurring readers are unlikely to find much new in terms of the big picture and the main ideas, but might find something new in detail. They might certainly still benefit from the data sets and additional references. (Points where I tend to be very weak for reasons of time and motivation.) Similarly, those familiar with Murray’s other works and/or works by similar authors might recognize the big picture and the main ideas.

It is a short but valuable read, gives considerable data (e.g. on crime) and analysis of data showing that claims about e.g. (pro-White/anti-Black) “systemic racism” are quite incorrect, and contains discussions about e.g. why “identity politics” and “intersectionality” are fundamentally flawed ideas (view individuals as individuals—do not define them by what groups they belong to). A key observation is that disparities in treatment and outcomes arise mainly from differences in behavior—not racism or racial discrimination. (No, you were not arrested because that cop was a racist pig. You were arrested because you robbed someone.) Often, the disparities arise despite pro-Black racial discrimination, notably with regard to college admissions.

The data is repeatedly combined with information on incorrect perceptions, e.g. that many overestimate the proportion of Blacks and Latinos* in the population very considerably, which gives a flawed baseline for any further thought on the matter. (Also note e.g. parts of [1], where I discuss some potential consequences of such incorrect perceptions, and an analogous situation for exaggerated COVID beliefs.) Generally, the issue of comparing against the right baseline is important, not just in the sense of knowing the right values but of actually picking the right one, for example, in that local rates must be measured against local circumstances, like local demographics, not the national ones. (My own go-to example is to compare e.g. arrest rates with what proportions of criminals belong to what group, not what proportions of the overall population.)

*Presumably, used in the same or almost the same sense as “Hispanics” in e.g. “The Bell Curve”. Note that he later switches to non-standard labels for various groups, including a plain “Latin” for Latinos/Hispanics.

Other contents include discussions of IQ, differences in IQ distributions between groups,* and disparities between common prejudice about IQ and what science says on the topic; how Blacks are admitted to college based on laxer criteria than Whites and Asians, and the negative consequences thereof; how job performance can differ between groups and must be factored in when we look at e.g. career success; the damage done to science and journalism by the restrictions that the current anti-intellectual far-Left climate imposes; and the potential harm from the many blanket accusations of racism. The latter includes a “Damned if you do; damned if you don’t” situation for retailers, who might have the choice between not servicing some neighborhoods (“Racist!”), hiking up prices to compensate for the greater rate of shop-lifting (“Racist!”), and taking a loss.

*As usual, any such references to groups refer to distributions, averages, and whatnot—not individuals.

The extensive notes include some interesting things too, apart from significant data and references, e.g. that “stereotype threat” would be more-or-less debunked by now. (Entirely unsurprising to me, seeing that this is how it tends to go with Leftist and/or social-science miracle explanations, but I had not hitherto heard of the debunking.)

A few important big-picture quotes:*

*Note that an ePub-to-text translation and later integration in my text might have led to e.g. formatting changes.

I DECIDED TO WRITE this book in the summer of 2020 because of my dismay at the disconnect between the rhetoric about “systemic racism” and the facts. The uncritical acceptance of that narrative by the nation’s elite news media amounted to an unwillingness to face reality.

By facts, I mean what Senator Daniel Patrick Moynihan meant: “Everyone is entitled to his own opinion but not to his own facts.” By reality, I mean what the science fiction novelist Philip Dick meant: “Reality is that which, when you stop believing in it, doesn’t go away.”

At the heart of identity politics is the truth that “who we are” as individuals is importantly shaped by our race and sex. I’ve been aware of that truth as I wrote this book — my perspective as a straight White male has affected the text, sometimes consciously and sometimes inadvertently. But identity politics does not limit itself to acknowledging the importance of race and sex to our personae. The core premise of identity politics is that individuals are inescapably defined by the groups into which they were born — principally (but not exclusively) by race and sex — and that this understanding must shape our politics.

I am also aware of a paradox: I want America to return to the ideal of treating people as individuals, so I have to write a book that treats Americans as groups. But there’s no way around it. Those of us who want to defend the American creed have been unwilling to say openly that races have significant group differences. Since we have been unwilling to say that, we have been defenseless against claims that racism is to blame for unequal outcomes. What else could it be? We have been afraid to answer candidly.

Over the last decade, on many campuses, the idea that a scholar’s obligation is to search for the truth has become disreputable — seen as only a cover for scholarship that is racist, sexist, or heteronormative. Scholars are criticized not for the quality of their work but for its failure to advance the cause of social justice. Work seen as hostile to that cause is met with calls for the scholar’s dismissal.

On the downside, Murray is still too cowardly, too naive, or too conciliatory towards Leftist readers to get the full point out. For instance, he repeatedly writes as if there were a problem with extremism on the “Right”* of a similar size to that on the Left, which is utter bullshit. Insofar as there are problems on the “Right”, they (a) are far smaller than the problems on the Left, (b) are often caused by the behavior of the Left (note a number of earlier texts, e.g. [2]), and (c) tend to concern groups with very little in common with the rest of the “Right” (cf. footnote*). Similarly, he repeatedly mentions existing (but non-systemic) racism, without proof of a non-trivial presence and without acknowledging that any such racism in today’s U.S. seems to tilt strongly anti-White, anti-Asian (by Blacks—not Whites), and/or pro-Black. Similarly, he takes an attitude that amounts to “it is a problem that people jump to conclusions about individuals based on crime rates”, where the far better attitude would be “it is a problem that people deny differences between groups in light of non-negative experiences with individuals”—or, for that matter, “it is a problem that those who are aware of crime rates are maligned for taking sensible precautions”.** Then there is his old and ignorant chestnut that “If Whites Adopt Identity Politics, Disaster Follows” (actual heading), for which he has yet to deliver any good arguments, where he fails to recognize that this, or rather a pro-White attitude,*** might become a necessity of self-defense if current trends continue, and where he ignores the importance of Whites in carrying current U.S. society. Moreover, his stance repeats the Leftist fallacy that the kid who does get mad after being exposed to “Not touching! Can’t get mad!” is the one at fault.
Generally, he seems extremely naive and/or ignorant of the actual “Right” and, in parts, hooked on a Leftist narrative about the “Right” in a manner that he has warned others against in other areas.

*I re-iterate my observations that (a) the “Right”, unlike the Left, is too heterogeneous to be a meaningful grouping, and (b) the “far Right” is not a more extreme version of the rest of the “Right”, unlike the far Left relative to the Left.

**He partially re-addresses this theme in a more intelligent manner later in the book, and makes up for some of this misstep.

***The phrase “identity politics” has much farther-going connotations and involves other aspects than race, e.g. sex and sexual orientation.

Excursion on other reading tips:
As a part of my general backlog problem, I never seem to get around to the reading tips, and the problem is made all the worse by a fading memory that would often necessitate a re-read before the actual writing. I will attempt a policy of making write-ups of “new” books immediately, and will address “old” books, even if more valuable, only if and when I have sufficient time and energy.

Excursion on Wikipedia:
To my surprise, I did not find any link to this book on Wikipedia.* However, I did visit the article about Murray, and found it in an inexcusable state, giving further support to my wish to avoid (English) Wikipedia. Most notably, right in the lede, it has the audacity to claim, in the context of “The Bell Curve”, that belief in genetic influence on group differences in IQ is “a view that is now considered discredited by mainstream science.”—which is extremely counterfactual. Among the sparse sources for this claim we find e.g. an article in the Guardian… This claim is the more problematic as (a) it is irrelevant to the main points of “The Bell Curve”, (b) its otherwise pointless inclusion in the lede indicates an attempt to discredit/defame Murray and/or “The Bell Curve” at an early stage,** (c) it could be interpreted by many readers to imply that IQ is not heritable (in general), which would be outrageously wrong. Wikipedia, plainly and simply, has turned into a hell-hole of far-Left reality distortion and propaganda—paralleling the issues with academia.

*Nor did I find one on my current replacement, Infogalactic, which would follow naturally from its datedness problem. I have yet to make a thorough search for other potential Wikipedia replacements.

**Of which the Left has a long history, making book and author, themselves, an area where the uninformed masses have a radically wrong impression, in a manner similar to how the masses often have a radically wrong impression of “systemic racism”.

Written by michaeleriksson

January 31, 2023 at 10:54 pm

Posted in Uncategorized


Follow-up: Some observations around a weird illness


As noted in an earlier text, I had a weird-but-short illness almost three weeks ago—from almost top-shape to very ill to semi-shape in a day or so. (And almost top-shape another day later, but after publication.)

However, I still have a major problem with sleep and tiredness, if not as major as on the day in question. Compared to my normal state, I have been much more mentally sluggish, low in energy, unable to get to work, etc., through a large part of most days; and I have on several days lost a few hours entirely to failed attempts to go to sleep or to simply vegetating, because I have been too tired to even keep my eyes open. Today and yesterday have been particularly bad.

As of now, I am uncertain which of the main candidates applies: an issue caused directly by the illness, a temporary continuation of the sleep disturbance from the main day of illness, or a temporary sleep disturbance coincidentally occurring close in time to the illness. (I have experienced similar situations in the past, but never for so long and only rarely to such a degree.)

Written by michaeleriksson

January 28, 2023 at 4:54 pm


Better or more familiar? / Thoughts on works for children and their translations


As I have recently repeatedly noted, there is a difference between being worse and being new/unfamiliar/whatnot (cf. [1], [2]). This brings a backlog item to the surface:

Over the years, I have encountered various works in different language versions, as with e.g. some English children’s books (in Swedish as a kid; in English for a nostalgia reading as an adult) and various older Disney movies (especially specific scenes through a Swedish Christmas tradition). Some comic franchises I have read in all of Swedish, English, German, and French (but not necessarily the same works within the franchise), especially when making early steps in the non-Swedish languages.

Normally, I find that translations are inferior (often, highly inferior) to the originals, as with some absolutely ridiculous German mistranslations of the works of Terry Pratchett or utterly absurd mistranslations of film titles and dialogue (English movies are usually dubbed in Germany)—up to and including the replacement of the original English title with another English title. However, with many of these early encounters, it is the other way around.

A particularly interesting case is the title “Alice in Wonderland”, with the implication of “Alice in the Land of Wonders”, vs. the Swedish “Alice i Underlandet”, which can be interpreted either as “Alice in the Land of Wonders” or as “Alice in the Land Below”*.** As a young child, seeing that both match the contents of the story well, I was fascinated by the question of which of the two was correct, to the point that it transcended the story as such. My memory is a little vague, but I suspect that I tendentially came down on the side of “Below” as the more natural interpretation. (As I grew older, I learned of the original title and straightened this out. I do not know whether the Swedish ambiguity was deliberate or fortuitous, but it was certainly fortunate.)

*Or “[…] Land Under” to be etymologically closer at the cost of a less natural English formulation. A variation with “Land Down Under” is tempting, especially in light of weird animals, but the Australians might complain. (“Alice in the Netherlands” would just be confusing.)

**Both are a little odd idiomatically. I would probably have expected “Alice i Undrens Land” in the former case and, maybe, a formulation with “Under Jorden” in the latter, to match some other tales. (This with reservations for changing idioms and that this is a spur-of-the-moment thought that might not hold up on closer inspection.)

Above, we had an objective advantage for the Swedish name, but in other cases I suspect that my preference is rooted in “more familiar”. Is e.g. “Kalle Anka”* a better name than “Donald Duck”? There is no obvious reason, but the former still sticks with me. (And imagine my reaction when I first heard of Paul Anka…)

*“Anka” is “Duck”; “Kalle” is the usual nickname for “Karl”, a common Swedish name. Cf. “Charles” and “Charlie”. (The overall name likely predates Carl Barks’s involvement with the Ducks and is unlikely to be a nod. However, with an eye at funny names, I have to ask: Carl Barks at whom?)

As a side-effect, such a name change can give a different set of associations. Consider “Scrooge McDuck” (Scottish* miser of a Dickensian level; maybe clan related; well suited for tartans, kilts, and whatnots) vs. “Joakim von Anka” (nobility; possibly German*; sophisticated and well suited for fancy jackets, canes, and spats). Or consider book titles, e.g. “The Wind in the Willows” vs. “Det Susar i Säven”** (also note “Alice in Wonderland” above): Here, the Swedish translation was likely chosen to preserve the alliteration; and in terms of charm, for want of a better word, it works as well as or better (at least to my pre-conditioned ears). However, there is a shift in meaning and associations, as a willow is a tree and, while often associated with water, is not married to it. The Swedish “säv” appears to be the lakeshore bulrush or common club-rush, which needs a watery environment and certainly is not a tree. Looking at the contents of the book, much of it, especially early on, is river-centric, but much of it is not—which makes a willow a much better image than säv.

*Of course, I originally took all the Disney characters to be Swedish, as the opposite simply never occurred to me. (Excepting some who might have been explicitly presented as foreign resp. until such a time as their foreignness was mentioned.)

**Combined with the Latin name mentioned in the given link, a back-translation could amount to the wonderful “A Susurration in the Schoenoplectus Lacustris”—an unbeatable name for a book.

Songs, and often acting, from the old Disney movies also often strike me as better in the Swedish version, as with e.g. the “Silly Song” in “Snow White” or “Bella Notte” in “Lady and the Tramp”. (With reservations for the exact titles.) This in particular with regard to the lyrics, which often seem better chosen in Swedish.* Here we truly have a question of “better” vs. “more familiar”: On the one hand, there definitely is a “familiarity effect”; on the other, the early Disney (full-length) movies were extremely centered on animation and might well have prioritized other aspects of the movie (e.g. story, casting, music**) too low, which opened a window of opportunity for a local version to up the original a little.*** An additional possibility is that a translator who takes liberties with meanings and implications (as with e.g. “Det Susar i Säven” above) can gain an edge in some regard at the cost of less precision and less adherence to the actual intent. Note, as a related example, “In-A-Gadda-Da-Vida”, which is a nonsense version of “In the Garden of Eden”, but which gains an edge through fitting the melody in a smoother manner and which might well have been more successful with the nonsense lyrics/title than it would have been with a “proper” version.

*I will refrain from an analysis, as I would have to explain the Swedish lyrics with considerable effort and might still not bring the perceived difference across. I add the reservation that some Disney movies might have seen multiple Swedish dubs and that I refer to the older versions known to me.

**Notwithstanding that there are some truly genius melodies and/or musical performances here-and-there that stand in stark contrast to many lesser numbers. In terms of music, animation, and integration of the two, the “Silly Song”, above all, is a masterpiece. (“Fantasia”, of course, has strong music throughout, but it is not original music.)

***Which is not to say that such opportunities are automatically taken. German dubbing (also see above), which is unfortunately not restricted to children’s movies, typically moves between “awful” and “so awful that it should be banned by law”.

Excursion on multiple local versions and impressions:
An interesting effect is that children in different countries can watch the same movies and come away with different impressions, learn different lyrics, remember different voices, etc. Ditto, m.m., books, comics, etc. To stick with the Ducks, we have a good example of an odd effect in that “Uncle Donald” and “Uncle Scrooge” are turned into “Farbror Kalle” resp. “Farbror Joakim” in Swedish, implying a relationship of “father’s brother”, while the true relationship is “mother’s brother” (“morbror”).* There might even be a distorting effect on memory: with “The Wind in the Willows” my memory was of a much more river-centric book than proved to be the case during an adult nostalgia reading. (Some children’s books are reasonably enjoyable even for adults. Unfortunately, this was not the case here.) Sadly, the same can apply to adults in countries like Germany, e.g. through lines from Hollywood movies that are considered iconic in their German translation.

*To my recollection, the need for a translation manifested before the exact relationship had been established.

Excursion on German Disney names:
The Germans have often stuck closer to the English originals for various ducks, mice, and whatnot. However, when they do deviate, the result is often quite poor. Compare the English and Swedish femme fatale Magica de [Spell/Hex] with the (namewise) boring Gundel Gaukeley—a name suitable for a pullover-wearing school teacher. Scrooge keeps a part of his English name through a “Duck”, but loses the “Mc” and the Scottish connection.* His given name is replaced with “Dagobert”, which loses the Dickens connection, but this might be forgivable, as Dickens is much less read in Germany. On the downside, there might now be an injected French (!) connection and there is no obvious relation to money. Something like “Fugger McDuck”** or “Jakob McDuck” (note Jakob Fugger) would have seemed a more natural solution. (To make matters more complicated, “Dagobert” is also the Swedish name for Dagwood of “Blondie”, and he is likely what most Swedes imagine when they encounter this quite rare name.)

*Of course, “Mc” is sufficiently well known as Scottish even in Germany that this is a loss.

**But would have been hilarious/inappropriate if brought back to an English context.

Written by michaeleriksson

January 26, 2023 at 1:03 am

Observations around recent writing(s)


A few random observations around my recent writing and writings:

  1. It is easy for some formulation, some word, some approach, whatnot, to take on an almost formulaic character and to become detached from its original purpose, original meaning, or whatever might apply.

    As I have noted in the past, “e.g.” has in my eyes come to be closer to a mark of punctuation than to an abbreviation—to the point that I have considered formalizing this by introducing a sign of my own to fill the same role in slightly less space. I have abstained for the simple reason that such a sign would require constant explanation to new readers, which makes it highly suboptimal for the blog (and similar) format(s). (While it might work reasonably well in a book, provided that it is the only, or just one of a very few, special signs.) However, a somewhat similar complication applies even to “e.g.” itself: While any English reader should understand this abbreviation (and shame on him, not me, if he does not), the mental switch that has taken place with me, from abbreviation to quasi-punctuation, cannot reasonably be expected. Where I might then find a repeated use of “e.g.” in one paragraph or even one sentence as unremarkable as the repeated use of commata, someone else might view it as an absurd repetitiveness of formulation—just like I react negatively to Wikipedia articles that contain “also” in every other sentence.* I have increasingly tried to be more varied by substituting a “for instance” here, a “for example” there, and some other formulation in between, but this often feels wasteful, and it removes any claim that an “e.g.” closely followed by another “e.g.” is legitimate, as the use now clearly is not punctuation.

    *Notably, articles on actors tend to be filled to the brim with claims like “He also starred in X. After that, he also starred in Y. In 1999, he also starred in Z.”. The best that can be hoped for is that formulations using “also” are alternated with formulations using “then”.

    (Yes, I can write entirely without “e.g.” and its equivalents—as in the above paragraph, where I feared that the use would decrease readability unduly. I can also get by without “however”, “on the one hand”, “firstly”, “whatnot”, and whatnot; and I do realize that they can make a text, in some sense, heavier. However, when I do, the end result is that I feel a loss of precision in bringing my intentions over and/or must use uglier or less flexible formulations.)

    Another example is “excursion”: From an etymological point of view,* the word can be given a very free interpretation; however, I suspect that my own use pushes the border. The reason is similar, in that the word “excursion” came to mean “some lines following the main body of a text, never mind form, length, and content” to me. Here, too, I occasionally try to be more precise, e.g. by marking a disclaimer with “disclaimer” instead of “excursion”, but I am not very consistent.

    *The Latin root would amount to a “running out”, with English meanings including various trips and side-trips (as well as metaphorical ones in texts), and with German/Swedish near-calques (“Ausflug”/“utflykt”) that have been known to include picnics.

  2. Beginning in December (2022; currently, January 2023), I have increasingly tried a policy of “write the text at once and add no new backlog entries”. This has worked reasonably well, but not perfectly. A particular problem is what to do when I am writing one text and am struck by the idea for another. The repeated misjudgment that “I can temporarily suspend the writing of longer text A to get shorter text B out of the way” has usually resulted in text B being as long as or longer than text A, and comes with the risk that I have an idea for a text C while writing text B… I have certainly not had enough time to get rid of older backlog entries and my backlog has still grown somewhat. (But this must be seen in the context of a lower text count for January, to date, than in the previous months.)
  3. A partial reason for this lower text count is my recent illness. The illness was quite brief, but it resulted in a prolonged sleep deficit. Too often, in the days since, I have simply lacked the energy and the alertness of mind to write (read, or do anything else more constructive) resp. to write a text of a greater length/importance/effort/whatnot.

    Other reasons include that “fed up” has often won in a contest between a wish to finish more texts and my being a bit fed up with writing.

  4. Over time, I have found my own style of writing growing more complex, unless I consciously counter it by deliberately striving for simpler words and a simpler sentence structure. I have not quite descended to the level of, say, Oswald Spengler, whom I have explicitly criticized in the past ([1]), but I do find myself drifting in a similar direction. While I consider this unfortunate, it raises the question whether such convoluted prose is a personal failing and/or an attempt to “sound smart”, which I have long assumed, or whether it might be an unfortunate side-effect of too much writing and/or too much reading of intellectual or “intellectual” authors.

Written by michaeleriksson

January 17, 2023 at 11:48 pm
