Michael Eriksson's Blog

A Swede in Germany

Archive for October 2018

Bad at math/Follow-up: College material

A topic with some overlap with my recent text on “college material” is math ability and its interpretation: The world is apparently filled with people who (a) are highly intelligent, (b) have a weak spot specifically for math, even to the point of struggling with the principles of fractions.

The sad truth is that these people are almost* certainly not intelligent—they merely believe that they are, because the material they encounter in other fields requires too little thinking to learn, or to get a good school grade, for an intelligence deficit to become obvious. If someone is taxed by understanding** something as basic as fractions, elementary trigonometry, or high-school algebra, this points to serious limitations—even in the face of e.g. a later bachelor’s degree*** in a soft field.

*Exceptions might exist, possibly relating to some neurological condition; however, if they do, they are likely rare and I am not aware of any example in my personal experience. There have been some cases of people using the “I am intelligent, just weak at math” claim—all of whom have been fairly stupid.

**As opposed to memorizing some rules about how to use fractions—those with an understanding can derive the rules when they need to. (A minimal worked example follows below these footnotes.) Further, as opposed to just finding math boring and not bothering to put in the effort. (Here, part of the problem with other fields might be found: Understanding can be quite important in these fields too, but is often entirely unnecessary to pass the grade or to create the self-impression of having mastered the topic, implying that a lack of understanding is not punished and that the student might not be aware of his lack of understanding.)

***Indeed, a disturbingly large proportion of the population seems to jump to the conclusion that anyone with a bachelor’s degree is intelligent—irrespective of field, grades, effort needed, and how much was actually understood (cf. the previous footnote).
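
To illustrate the second footnote with a minimal worked example (my own addition, not part of the original argument): the rule for adding fractions does not need to be memorized, since it follows from nothing more than expanding both fractions to a common denominator,

\[
\frac{a}{b} + \frac{c}{d} \;=\; \frac{ad}{bd} + \frac{cb}{db} \;=\; \frac{ad + bc}{bd} \qquad (b, d \neq 0).
\]

Someone who understands why multiplying numerator and denominator by the same non-zero number leaves the value unchanged can re-derive this in seconds; someone who has merely memorized a recipe has nothing to fall back on when memory fails.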

I once heard the claim (and I would tend to agree) that we all have a point where math becomes “too hard”—the difference lying in the when and where. Comparing fractions with some of the math I encountered as a graduate student is like comparing splashing about on a flotation device with elite swimming—to fail at the former is a disaster. (And note that there are further levels yet above what I encountered even at the graduate level—just like not all elite swimmers are Olympians, not all Olympians win a gold, and not all Olympic winners are Michael Phelps.)

Generally, the impression of math created in school does not have much to do with true math: Math is not about knowing or being able to calculate that 13 + 25 = 38. It is about things like being able to reason, spot a flaw in an argument, find an overlooked special case, solve problems, come up with creative solutions, think abstractly, abstract the specific and find the specific in the abstract, see similarities and differences, … While there might be some room for having more or less math-specific talent (and definitely interest) for two people who are equally good at these skills, the skills are quite generic and translate into any number of other areas, including everyday life. Indeed, I would not trust anyone unable to understand fractions with any decision of importance or in an even semi-important role—not because understanding fractions is vital, but because the inability points to more general deficits.

Using math as a proxy for being “college material” is a plausible-sounding idea—and it has the advantage over “[be] able to consistently learn through a mixture of reading and own thinking” (my suggestion in the original post) that it is easier to test in advance. However, on an abstract level, it has disadvantages similar to those of an I.Q.* cut-off, while my suggestion automatically takes care of aspects like the differing difficulties of various fields. Of course, more practically, the “test in advance” aspect is quite important—which explains why e.g. the vanilla SATs have a math section and not a chemistry, history, or whatnot section.

*Not only are math ability and I.Q. fairly strongly correlated, but they are both arguably proxies for the same thing(s) in the context of being college material.

Excursion on the benefit of being pushed to struggle and revealed to be wrong:
An incidental benefit of studying math is that the student has a greater opportunity to learn both humility and his own limits. Math requires thinking, can push us to the border of what our brains can understand, and the only way to escape being provably wrong, again and again, is to be superhumanly good. In the social sciences, it is possible to go through a college education and an ensuing academic career without the same exposure to “I do not understand” (cf. above) and “I was provably wrong”* (either because the actual tests are missing or because there are loopholes when the tests go the other way).

*Note that I speak of opinions based on faulty thought, not e.g. faulty memory: There are many things (e.g. the year of Napoleon’s death) that are recorded as (more or less) fixed truths, which might be misremembered and the memory verified as incompatible with the accepted record. A simple memory error says relatively little about someone, however, and being exposed to a memory error is unlikely to bring humility. In contrast, an elaborate hypothesis involving Napoleon and the Illuminati might be impossible to actually disprove, even when others consider it patently absurd.

Written by michaeleriksson

October 31, 2018 at 8:54 am

A few thoughts around prose

What makes good or aesthetic prose and good (writing) style has been on my mind lately. A few particular issues:

  1. I am not convinced that these matters are that important: Language is mostly a vehicle for something else*. By analogy, whether someone watches the same movie in VGA and mono or Ultra-HD and surround sound can make a difference—but it is far more important what movie is watched. If someone can get the point across with mediocre writing—is that not enough?

    *Exactly what depends on the work. Examples include a set of facts, a line of reasoning, a character portrait, a realistic depiction of life, a series of action scenes, a feeling of horror, …

    Some* authors have such an ability to write beautiful prose that it enhances the enjoyment of the text; however, they are a small minority and most best-selling** authors are fairly weak in this regard.

    *I can e.g. recall being highly impressed by some of Goethe’s and Thomas Mann’s works. Unsurprisingly, such authors often have a background heavy in poetry; surprisingly, they have often been German, possibly because the apparent unwieldiness of the German language has led to a compensating increase in skill. Shakespeare is an obvious English example, to the degree that his plays are considered prose.

    **This could partially be explained by the typically commercial and/or “low brow” character of best-selling material. However, (a) the basic principle of language as a vehicle applies even to “high literature”, (b) there are plenty of examples of high literature in unremarkable or even poor prose. (The German of Kafka, e.g., is in parts horrendous, yet he remains in high esteem.) The success of great literature when translated into other languages is a further argument, seeing that now the skills of the translator and the many obstacles to translation are of similar importance to the (prose) skills of the actual author.

    In terms of style, some limits* must be set, especially regarding clarity and (to some degree) conciseness. However, the limits needed for a reasonable vehicle are not all that high (assuming that grammatical correctness has been reached), and any intelligent college graduate should already have the skills to exceed them.

    *There are many writers, including a disturbing proportion of bloggers, journalists, and Wikipedia editors, who are so awful that they would do better not to write at all.

  2. Verbosity* is a tricky issue. (And, in as far as it is negative, I am unusually poorly suited to throw the first stone.)

    *Here this word should be taken in a very wide sense, covering not just “needless words”, but also e.g. the inclusion of details of little importance, roundabout descriptions, unnecessary dialogue, … (No better generic term occurs to me.) Indeed, my focus below largely leaves the topics of prose and style, to focus on something more general.

    On the negative side, works like Pride and Prejudice show how verbosity can be taken too far, e.g. through turning the joy of reading into boredom or unduly increasing the time needed to read a work. Generally, text that does not serve a clear purpose, e.g. moving the story forward or giving nuance to a character, is often a negative and amounts to unnecessary filler. A good analogy is the low tempo and low content shown in many independent, low-budget, whatnot, movies—including those that begin with someone driving a car in silence for several minutes, then parking in silence, then walking to something in silence, with the first significant words uttered/events happening after five or more minutes. It would be better to condense the little information present* to a fraction of the time and just make the movie a little shorter—boring and artistic are not the same thing. Another analogy and partial example is the use of unnecessary adjectives and blurb in advertising language, as discussed in an older text on idiocies of ad writing (to which I might add the blanket advice to cut out any and all adjectives from an advertising text).

    *E.g. that a strategically placed photograph hints that the driver is married with two children, without the need to explicitly mention the fact—something that takes seconds, not minutes, to bring across. If worst comes to worst, doing a “Star Wars”-style introduction and skipping the car ride entirely would be the lesser evil… (Notably, if these car rides and whatnot are intended to serve another purpose, e.g. building atmosphere or tension, they usually fail equally badly at that. If they could pull it off, by all means—but it appears that they cannot.)

    On the positive side, it is often the small additional details that add charm to a work, that prevent it from being just a string of events, that give a marginal character that extra dash of individuality, etc. I have made some minor experiments with cutting out everything (apparently) non-essential from a text, and the result is so sterile and uninteresting that it makes a TV manuscript* a good read in comparison. The lesson is that, while any individual item that appears non-essential might actually be non-essential, removing too much kills the work.** While there is a point of “too much”, most amateurs are likely to fail clearly on the side of too little.*** There are even cases when something with no apparent major bearing on the overall plot/theme/whatnot cannot be cut without damaging the whole—consider e.g. “The Lord of the Rings” and the many detours and side-adventures. (Sometimes the road is more important than the apparent destination.) As a counter-point, I have usually found Stephen King more interesting as a short-story writer than as a novelist: While his ability to paint interesting portraits, give color to situations, find interesting developments, whatnot, might be his greatest strength, he often pushes it too far in his novels—and cutting another**** ten or twenty percent would be beneficial. Quality over quantity.

    *A TV manuscript, like most plays, is not intended to be read for entertainment—it is an instruction on how to create the entertainment. The difference might be less extreme than between a recipe and the finished food, but it goes somewhat in the same direction.

    **Also similar to a recipe: This-or-that ingredient might be foregoable entirely, another might only be needed in half the stated proportion, whatnot—individually. Remove/reduce all of them at once…

    ***My contacts with the works of amateurs have been very limited since I left school, but these contacts, my recollections from my school years, and my own preliminary dabblings with fiction all point in this direction. Indeed, it could be argued that this is the failure of the aforementioned independent movies, e.g. in that the car ride could have remained, had it been sufficiently filled with something interesting (and preferably relevant to plot, characters, whatnot).

    ****According to “On Writing”, he tries to cut ten percent from the first to the second draft.

    From another positive point of view, reality has details, and fiction with too little detail is unlikely to be realistic: Go for a walk in the forest and there will not just be trees around—there might not be a pack of wolves, but a squirrel, a few birds, and any number of insects is par for the course. (And a tree is not just a tall brown thing with small green things on it.) Take a train-ride and there will almost certainly be some unexpected event, even be it something as trivial as being asked for the time or someone falling over. Etc. Sometimes such details do more harm than good; sometimes they are exactly what is needed. (Do not ask me when: I am very far from having developed the detail judgment.)

    The trick is likely a mixture of finding the right middle ground and gaining a feel for which “extras” are merely unnecessary filler and which actually bring value to the text—add color, but do not lose tempo. Chances are that the drives for detail and relevance can be combined, e.g. in that an event written just for color is re-written to actually tell us something about the character(s).

  3. The use of various connecting words and “preambles”* is an aspect of my own (non-fiction) writing that has long left me ambivalent: On the one hand, they do serve a deliberate, connecting purpose that enhances the text in some regards; on the other, I am often left with the feeling of a lack of “smoothness” and of too many words that only have an auxiliary character—or even the fear that I would be annoyed when encountering such an amount in texts by others.

    *E.g. “However, […]”, “To boot, […]”, “Notably, […]”, “On the other hand, […]”, etc.

    In almost all texts that I read, including those by successful fiction writers, such words are used far less often, and much more of the job of making connections is left to the reader (who, judging by myself, is only very rarely impeded). My background in software development (where the text given to the computer should leave as little room for ambiguity as possible) makes me loath to change my habit, but chances are that I do take it too far even in a non-fiction context—and in a fiction context, this habit could be deadly.

  4. I am often troubled by (and some of the previous item goes back to) the limited mechanisms for formal clues concerning the syntactic/semantic/whatnot groupings and intentions of a text. A recurring sub-issue is the use of commata, the comma being used in a great number of roles* in writing, which often forces me to deliberately hold back on my use, lest my texts be littered with them.

    *Including e.g. as a list separator, as a separator of main and subordinate clause, as an indicator of parenthesis, … The situation is made worse in my case, because different languages have different rules, and I move between three different languages. (For instance, according to English rules, a text might correctly include “the horse that won”. According to German rules, this would be “the horse, that won”. Also note the contrast to the English “the horse, which won”, with a slightly different meaning.)

    For instance, if we consider a sentence like “the brown horse ran fast and won by a large margin”, there is a considerable amount of “parsing” left to the reader—and parsing that largely hinges on knowing what various words mean/can mean in context*. Grouping the individual words by structure, we might end up with “(the (brown horse)) ((ran fast) and (won (by (a (large margin)))))”—while a sentence like “the horse and the mule […]” would result in the very different “((the horse) and (the mule)) […]”, giving some indication of how tricky the interpretation is.** (And such a mere grouping is far from a complete analysis—in fact, I relied on previous analysis, e.g. the identification of “horse” as a noun and “won” as a verb, when performing it.)

    *Not all words have a unique interpretation. Consider e.g. garden-path sentences or absurdities like “Buffalo buffalo buffalo buffalo.”, which actually is a complete sentence with an intended meaning.

    **Humans rarely notice this, unless they are learning a new language or the sentence is unusually tricky, because these steps take place unconsciously.

    Fields like linguistics and computer science approach such problems through the use of very different representations, notably tree structures, that are capable of removing related issues of ambiguity, of needing* to know what every word means, etc., and I often wish that everyday language would use some similar type of representation. (A minimal sketch of such a representation follows after this list.) As is, I stretch the boundaries of what language allows to express my intention—to the point that I often catch myself using “e.g.” and “i.e.” more as punctuation (in an extended sense) than as formulation. (Which explains my arguable overuse: In my own mind, they register more like a comma or a semi-colon does, than like “for instance”.)

    *With some reservations for words recognizable in their role through various hints or context. The classic example in English is the “-ly” suffix as a (far from perfect) hint that a word is an adverb.

  5. As an interesting special case of the previous item, the use of commata and semi-colons is often counter-intuitive: If we view the comma (“,”), semi-colon (“;”), colon (“:”), and full-stop (“.”) as differently strong “stops”,* which is common and has some historical justification, then a sentence like “I found red, green, and yellow apples” simply does not make sense. We might argue that the separation of “red”, “green”, and “yellow” is warranted; however, at the same time we want them to be (individually or collectively) attached to both “I found” and “apples”—which is simply not the case if the commata are viewed as stops.**

    *Here we see another case of characters doing double duty: Among the multiple roles of quotation marks we have both the signification of a literal string and of something metaphorical or approximate. Different signs for these roles, the role as an actual quote, the “scare quote” role, and whatever else might apply, would be neat. (Then again, most people would likely be overwhelmed by such a system, and it would degenerate back into something less differentiated—a problem that might kill quite a few potential improvements.)

    **But note that this problem disappears with appropriate grouping, like “I found (red, green, and yellow) apples.”, which would be one way out. A better way, disconnected from the interpretation as stops, is to see the sentence as an abbreviation of the cumbersome “I found red apples and I found green apples and I found yellow apples”.

    In some cases, the problem could be limited by the prior introduction of a stronger stop*; however, this would often lead to awkward results and/or be incompatible with established use. For instance, “examples of apple colors: red, green, yellow” would be OK (in a context where this is stylistically tolerable), but “examples of apple colors are: red, green, yellow” is extremely odd. This solution is similarly awkward for the original example (“I found: red, green, and yellow apples”) and leaves the original problem unsolved—“I found” is now offset, but “apples” is not. We might get by with “I found: red, green, and yellow: apples”, but this would be entirely unprecedented, hard to combine with any current interpretation of “:”, and better solved (assuming that an extension is suggested) by use of one of the bracket types**.

    *Note that the examples provided are somewhat different when “:” is viewed as a stop and when viewed as a “list introducer” or similar.

    **For instance, the scripting language Bash uses “{}” for a similar effect: The command “echo 1{a,b,c}2” results in the output “1a2 1b2 1c2”. (However, “()”, “[]”, and “<>” would be equally conceivable. Other bracket types exist, but would be problematic with current keyboards.)
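
To make the idea of an explicit grouping/tree representation (points 4 and 5 above) more concrete, here is a minimal sketch in Python. This is my own illustration, not anything from the original texts, and the groupings are hand-made rather than produced by an actual parser. Nested tuples stand in for the parse tree; the small helper merely recovers the flat word sequence, while the attachment information (e.g. that the color list belongs to “apples”, and the whole phrase to “found”) lives in the nesting itself:

    # A "poor man's" parse tree: nested tuples encode the grouping explicitly.
    # The groupings are assumed/hand-made -- nothing here performs actual parsing.

    # "the brown horse ran fast and won by a large margin"
    horse_sentence = (
        ("the", ("brown", "horse")),                    # noun phrase
        (("ran", "fast"),                               # verb phrase, first conjunct
         "and",
         ("won", ("by", ("a", ("large", "margin"))))),  # verb phrase, second conjunct
    )

    # "I found (red, green, and yellow) apples" -- the bracketed grouping from point 5;
    # the surface commas and "and" are dropped for simplicity.
    apple_sentence = ("I", ("found", (("red", "green", "yellow"), "apples")))

    def flatten(node):
        """Recover the plain word sequence by walking the tree left to right."""
        if isinstance(node, str):
            return [node]
        words = []
        for child in node:
            words.extend(flatten(child))
        return words

    print(" ".join(flatten(horse_sentence)))  # the brown horse ran fast and won by a large margin
    print(" ".join(flatten(apple_sentence)))  # I found red green yellow apples

Note how the flat output is exactly what the reader normally sees: the tree adds nothing in terms of words, but it removes the need to re-discover the grouping, which is the point of such representations in linguistics and computer science.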

Excursion on “Catch-22”:
A draft extended the mention of Kafka with “Joseph Heller, whose ‘Catch-22’ I am currently reading, appears to be a similar English example”. If this book is considered “high literature”, it is indeed a good example; however, I am highly skeptical of this classification: Apart from a few good laughs and the eponymous “catch”, the first hundred-or-so pages have had very little to offer of anything—and give the impression that the author has just sat down at the keyboard, written down whatever occurred to him in the moment, and then sent the resulting draft to be published. There are, incidentally, some Kafkaesque setups, but I would recommend Kafka, himself, to those looking for the Kafkaesque. It might be that the book makes more sense to someone who has lived in a similar setting or it might be that the remainder is better; however, my current feeling is that this is yet another book that has gained its reputation due to popularity—not literary quality.

Written by michaeleriksson

October 29, 2018 at 10:40 am

Of Mice and Computer Users

There are days when I can barely suppress the suspicion that life is a weird cosmic joke, “Truman Show”, or scientific experiment that replaces mice in a labyrinth with humans in a Matrix.

Today is one of those days: I had cleared almost every task off my schedule in order to dedicate myself to the tax declaration for 2017—after having postponed it again and again for the last two weeks and knowing that it would likely leave me in too poor a mood to risk anything else that could aggravate me. (Cf. a number of earlier texts, e.g. [1].)

Then a server that I had had running for several weeks without problems crashed. I rebooted it, started things up again, and decided to do some minor clean-up while still logged in. (Should have stuck to the plan…) In the process, I (very, very unusually) managed to screw up a command, leading to all the files in the user account being moved. While I discovered this immediately after submitting the command, I could not interrupt it, because my session froze… After a minute or so of waiting, I forced a new reboot.

Less than thrilled, I proceeded to clean up the damage (and fortunately, no irrecoverable damage took place)—only to have my notebook now complain about a CPU being stuck and eventually freeze, forcing a reboot of the notebook… Only yesterday, I had noted an up-time of roughly sixty days—today, right in this already annoying situation, it fails! Worse, after the reboot, after I had got everything* back up again, the notebook just crashed. Roughly sixty days without problems and then two forced reboots in twenty minutes. Worse yet, I next decided to use the latest installed kernel,** seeing that I trailed heavily in versions, and that newer kernels are usually better—and found myself needing yet another reboot within five minutes…

*With a number of different user accounts, different encryption passwords, and whatnot, this takes a lot more time for me than for the average user. Normally, this is not a problem, because I only need to reboot every few months. When I have multiple reboots in a single day, the situation is very, very different.

**At some point, the newest release became unstable with my notebook, and I set up my boot-loader to use an older, stable kernel per default. However, that was at least six months ago, running an older kernel is a potential security risk, and I had hoped that the current newest release would have resolved these problems in the interim. Unfortunately, this was not the case. Switching to a newer kernel in the above situation was, admittedly and with hindsight, pushing my luck; however, if I had not tried it today, the next “natural” opportunity might have been another sixty days into the future. (Indeed, going by my existing plan, I should have switched kernels already for the first reboot, but simply did not remember to do so until the third attempt.)

As of now, I have an up-time of a little more than four hours (back with the old kernel), and hope for another sixty-ish days. However: Half the day has been wasted between the extra efforts and the time needed to restore my mood—and I am not taking the risk of attempting the tax declaration today, lest things end with a notebook that crashes in a more literal sense (say, into the nearest wall).

Written by michaeleriksson

October 25, 2018 at 3:38 pm

College material

Occasionally, the question of who is and is not college material is relevant to my writings. This is a tricky area, seeing e.g. that different fields differ in how much ability to think and how much ability to memorize is needed—even complications like grade inflation and underwater basket-weaving aside. Approaches like drawing a line at an I.Q. of a given value (e.g. 110 or 115) are too inflexible both in this regard and through neglecting criteria like the willingness to put in the work. (In other words, a certain I.Q. might be a requirement, but is not, alone, sufficient.)

Based on my own observations, I would suggest that a better heuristic is to consider as college material those who are able to consistently learn through a mixture of reading* and own thinking—without needing lectures, other detailed** instruction by professors, TAs, whatnot, or the help of other students. Lectures are there for people who cannot read and/or cannot think for themselves! (See an older text for more information on why lectures are idiotic. Note especially the centuries-old Samuel Johnson quote.)

*Typically, appropriate books; however, other types of texts can be relevant, including scientific articles and various ad-hoc texts written for a specific course.

**Needing occasional help, e.g. due to an unclear passage in a book or a rare blind-spot, might be acceptable. Even here, however, the preferred solution should be to spend more time thinking until one “gets it” through own efforts, possibly aided by alternative written sources.

Regrettably, the current trend goes in the other direction, e.g. with Germany increasing the proportion of “mandatory attendance” lectures during the Bologna process—college is by now based on the assumption that the average student is not college material, be it by my measure or by an I.Q. measure. Certainly, the school system is neither geared towards giving students skills of this type, nor towards filtering them by such skills.

In the bigger picture, this measure points to fundamental flaws in the education process, including the wasteful use of professors for holding lectures—contrary to popular opinion, the main tasks of a professor should relate to research and not education. Or consider the point of going to college: For a student with the capability to learn on his own, the point is to get the degree that his own studies cannot provide—the other benefits he can gain on his own. Why not reshape colleges to focus on independent learning, with opportunities to simply have knowledge and understanding tested?*

*Seen as a non-rhetorical question, answers like “Because we would be hard-pressed to charge an arm and a leg per year just for a testing opportunity!” arise.

Written by michaeleriksson

October 25, 2018 at 2:27 pm

Thoughts around social class: Addendum Part I

Re-reading Thoughts around social class: Part I, I notice two (or three) points that would benefit from expansion:

Firstly, I discussed socio-economic status just in terms of income and education, forgetting that profession/job/whatnot is normally a separate third leg.* I suspect that this third leg is not that important to my discussion, having less practical potential effects and, indeed, being more a matter of status for most people (after adjusting for income and education as separate factors). However, for the sake of completeness, this third leg goes the same way as the other two in my anecdotal examples: Contrasting me and my sister, I worked in various qualified positions in software development, including several variations of developer** (often as “senior”), architect, business analyst, and consultant, while she has spent a significant part of her life unemployed and (if I understood my step-father correctly) has finally found work as a personal-care assistant—with the same parents, we differ considerably on all three legs. My father’s mother was a nurse***, while my mother’s mother was some type of hospital orderly, which puts them in the approximate same area of work, but at different levels of competence, of status, and in the hierarchy; my father’s father was a teacher**** and even substitute principal, while my mother’s father was an ambulance driver*****—with parents differing on all three legs, my parents landed on roughly the same level.

*Which is not to say that these three legs are necessarily a universal definition. The concept is inherently ambiguous.

**There is a lot of title confusion in the world of IT, so take the title with a grain of salt. For instance, I once, switching employers, went from being a “software engineer” to being a “software developer”, with virtually no change in my actual work.

***Due to the difference in country and time, I am uncertain how her role compared in detail to that of a modern nurse with a certain qualification, e.g. a U.S. “registered nurse”. However, she had the title (“sjuksköterska”) and the formal education of the day to go with it. Also: Bear in mind that the career paths available to women of her (born 1914) generation were more restricted than today, implying that being a nurse was close to the ceiling for a woman in medicine. (Whereas a nurse of either sex, today, is implicitly someone short of being a physician.)

****Here too, the profession was more prestigious than today, albeit for other reasons than with women and nursing.

*****Had he been working today, he would probably have been qualified and classified as some type of EMT; however, in my understanding, these roles were not very developed at the time and the actual “loading” of patients and driving of the ambulance were the core tasks. It should be added, however, that he was active both with the Salvation Army and some type of union work (I am unaware of the details), appears to have been highly regarded in both roles, and might have scored well on a “fourth leg”.

Secondly, in my excursion on children, I discuss the degree of assistance that is appropriate. The topic of education is not relevant to that discussion; however, without mentioning education, the text is potentially misleading: An important overall theme is a reasonable degree of equality of opportunity and a high degree of social mobility. A wide availability of reasonably priced and reasonably high-quality education is vital to this—anyone with the right brain* should be able to get whatever level of education he desires. This could require additional measures, e.g. free or cheap state** schools of various kind, subsidized student loans, encouragement of scholarships, and similar.***

*This is an important restriction: Common ideas like that everyone needs more education, that anyone with the right degree can do the job well, that it is college that creates the great mind, whatnot are highly misguided. It would be in the best interest of both society and the individual to reduce the college-going proportion of the population, restore the quality of the education, and make a diploma the type of proof of ability that it should be. Similarly, chances are that e.g. the “no child left behind” attitude has done more harm than good to the overall school system, trying to force an impossible improvement on the untalented and reducing opportunities for the talented in the process.

**Private institutions must be allowed to set their own prices and admission criteria. This will cause some remaining inequality of opportunity, e.g. in that the rich can afford to pay for Harvard and the poor cannot. Still, this is far less negative than a situation in which only the rich can afford college at all. (And it must be put in relation to the rights of the private colleges and the people behind them.) Further, without the right brain, money is not enough. (Of course, a high-reputation college that admits and graduates students mostly based on money is not inconceivable—but how long would its reputation remain high?)

***Assuming that we work within something resembling the current system. I am very open to changes, and would like to note that education is already available at a low cost even in the U.S.—the diploma is the expensive part… Some restriction on type of education might be sensible, e.g. in that studies for professional qualification are subsidized, whereas other studies are not, seeing that the former (a) are more important for equality of opportunity, (b) bring more value to society; while the latter are more of a personal satisfaction/development/whatnot issue. The latter do not require a diploma and can be taken care of outside of college. Indeed, my own “extra-mural” studies would easily cover a (sufficiently tailored) B.A. in “liberal arts”/“general studies”. (However, more detailed thought on the restrictions might be necessary, both with an eye on those who target an academic career and on the difficulty of judging what education has what benefit. For instance, I have heard it claimed that English is a better major than journalism for those who want to be journalists, despite the difference in professional orientation.)

Thirdly, parenthetically, a more explicit comparison between my parents might be beneficial. However, due to the great differences in choices and developments, going beyond “roughly the same level” is tricky. The one is an orange and the other an apple—but neither one is a grape or a melon.

Excursion on the changing status of professions:
Re-reading the early footnotes, I am struck by the change in status of professions (overlapping with one sub-topic I intend to include later). My aforementioned move from “software engineer” to “software developer” is coincidental in this regard, but it does illustrate an on-going devaluation of software development: With the great need for developers, too many incompetents have been let in, and the idea of a software engineer seems to have gone down the drain, be it with regard to status, qualifications, or approach. Following current trends, I would not be surprised to see the profession move to a similarly low-status position as teaching within one or two decades—this especially as teaching still tends to be a regulated profession, while software development is not. (The other way around would have been better…)

Remark on the rest of this series:
I suspect that there will be some delay with the remaining parts, because I have problems finding a reasonable structure for what I want to say—to the point that I cannot even tell whether there will be two, three, or four parts in all…

Written by michaeleriksson

October 23, 2018 at 5:14 am

Overlooked explorations of the male role, etc.

After my recent review of “Pride and Prejudice”, I have spent some time thinking on actually and apparently simplistic literature vs. something that has long annoyed me immensely: Common claims from (almost invariably female) “gender theorists” and their ilk that men spend too little time analyzing the “male role”, that questions of “manhood” or “masculinity” are not sufficiently explored, and similar. (While the same, apparently, does not apply to women—presumably, courtesy of the same “gender theorists”.)

These claims show a gross ignorance of the type of influences that those men who are interested in fiction have been exposed to since childhood—and of the considerable efforts, conscious or not, spent exploring such topics in fiction, since long before “gender studies” arose as a social construct*. To boot, they severely underestimate the amount of time many men spend privately contemplating related issues, let alone the apparently universal** male question of when one ceases to be a boy and becomes a man.

*Two can play that game…

**With reservations for societies where some type of initiation ritual is involved, as well as sub-cultures where it is tied to first having sexual intercourse. (Going by my own experiences, I suspect that the question is raised so commonly mostly because the process is gradual, both in one’s own eyes and in the eyes of the surroundings.)

Take “The Lord of the Rings” and consider the wide variety of characters, character developments, and situations: Take as positive examples Frodo and his heroic march; Sam and his undying loyalty; Merry and Pippin, and the sacrifices they make for friendship; and how all four grew to become so much stronger than they originally were (or proved themselves to be vs. thought that they were). Take as negative examples Boromir committing evil* in an effort to do good; Saruman being corrupted by a wish for power; Theoden falling prey to his personal Iago; or even Frodo, unable to give up the Ring during the deciding moment. (With many other examples to be found.) There are (mentally/morally/whatnot) small men and great men, there are small men growing, there are great men shrinking. There are dilemmas and decisions. There is heroism and cowardliness. There are good ends and means; and there are bad ends and means—even intermingled (cf. Boromir). A particular point of note is the epilogue in the Shire—unlike in so many other stories, defeating the main evil does not ensure that the world is safe and sound, and the work still goes on. (Incidentally, while the text is dominated by male characters, the few women that do occur are by no means house-wives focused on child-rearing. Most notably, Galadriel is a ruling queen, is one of the most powerful beings that appear in the story, and appears to wear the pants in her own family; while Eowyn disguises herself as a man, rides to battle, and slays one of Sauron’s greatest champions—both much worthier examples** than any of the female characters in “Pride and Prejudice”.)

*And from another perspective, we have the ethical dilemma of which actions are justifiable under which circumstances, and the opportunity to consider ourselves in different situations (also see another recent text). Unlike many other instances of evil being done in the name of good (or “the greater good”, as the case may be), the attempted evil was, on the surface, small and the situation one involving the fate of the world, making his actions easier to understand. (The more severe flaw was, likely, that he failed to comprehend the nature of the Ring, and that things would have ended much worse, had he been successful, than they actually did. My last reading being too far back, I do not recall the degree to which his actions were caused by an active influence by the Ring. The interpretation of these actions might need some corresponding adjustment.) Similar concerns about motivations and what-would-the-reader-do-in-the-same-situation apply in other cases too.

**I caught myself originally writing “examples for a young woman”. I immediately stopped to change this, although it is not unreasonable in this specific context: While there might be some areas where the sex of an example or role-model is relevant, it is almost always better to focus on the admirable characteristics. The feminist insistence that young women be given female role-models for this-and-that is highly misguided and counter-productive. If we want a role-model, we should pick someone suitable in a manner that ignores both our own and the role-model’s sex (and color, religion, nationality, whatnot).

Take “Hamlet”; take the “Iliad”; take “Le Morte D’Arthur”; take any number of other works by a great number of authors, even (particularly?) in the fantasy and sci-fi genres; take, even, the lives and adventures of Spiderman and the Hulk, in those despised super-hero comics, those heights of male “immaturity”. To a thinking mind, the right work can raise more questions around what it is and what it takes to be a man, how to be good, what dilemmas and problems can arise in life, whatnot, than the field of “gender studies” does (even discounting problems like ideological bias within that field). Moreover, in my impression, they do so to a far higher degree than does, m.m., the corresponding age-group literature for women, as demonstrated by e.g. “Pride and Prejudice”.*

*I must make the great reservation that I am not overly well-read in this area; however, those works I have read/watched with a similarly “for girls/women” image (as e.g. “The Lord of the Rings” has a “for boys/men” image) have usually fallen short in a similar manner to “Pride and Prejudice”—with questions like “Who gets whom?”, “Does he love me?” (or even “Do I love him?”), “Which of my two suitors should I pick?”, “Do I dare to have that chocolate bar?”, “Should I remain friends with that other woman, even though she is a horrible person?”, and similar shallowness. While some of these questions might, on a personal level, be important, they do not contribute much to personal growth, to developing a sense of ethics, to gaining insights, whatnot. (Note the difference between works written for women and works written by women—the latter can be quite insightful.)

These works often (similar to “Pride and Prejudice”) work with shallower and more unnuanced characters, proving that this, in and by itself, need not be a problem. However, where “Pride and Prejudice” gives the impression of either lack of insight or lack of effort (which of the two, I will not presume to judge), they often do so for deliberate reasons, in order to e.g. make a point more obvious or to be allegorical.* (Also note that my complaint against “Pride and Prejudice” was not lack of character depth, per se, but the compounded lack of almost everything, character depth included.) More generally, many works of fiction can be quite thought-worthy despite having a reputation that goes more towards entertainment literature. For instance, many with only a fleeting familiarity see Terry Pratchett as just a humorist (he was much more); for instance, many see the “Narnia” books as just children’s literature (they have insight even for the adult reader and can be read on several levels). Also see an excursion in the aforementioned review.

*However, many, especially for younger readers, can take this to a point that important insights are lost, most notably the realization that the bad guys usually consider themselves to be the good guys.

Interestingly, questions like those discussed above do not necessarily have any stronger connection with being-a-man-as-opposed-to-a-woman*. Instead, they center on being-a-man-as-opposed-to-a-boy, or, more generically, an-adult-as-opposed-to-a-child; or forego such divisions entirely to focus on e.g. what is right, with no restrictions on who is concerned (being-good-as-opposed-to-bad**, to stick to the pattern). If, then, a criticism against one of the sexes should be extended, it would be better directed at women*** for not paying enough attention to the child–adult (or good–bad) division and favoring the female–male division. To some degree, a man is a plain vanilla adult, making issues like a (specifically) male role largely uninteresting; while a woman is a strawberry adult with a scoop of cream, chocolate flakes, and a cherry on top, making an investigation of a female role more understandable. (And while I have no objection to women being strawberry instead of vanilla, do they really need all those extras?)

*However, some do, at least in public perception, e.g. in that the demands on a man to take responsibility are larger, ditto to be a provider or protector, ditto to, in a life-or-death situation, give his life to protect his wife’s, etc. Apart from these being unlikely to cause dissatisfaction among feminists, they are also usually of a type that does not require an adjustment of the male self-image or whatnot—if anything, they suggest that women should step up more, that society should put larger demands on women, and/or that women should revise their image of men.

**I use “bad” over “evil” for two reasons: Firstly, it is not necessarily a matter of e.g. ethics or consequences for others, it can also be a matter of e.g. capabilities and consequences for oneself. Secondly, even when ethics is concerned, “evil” might push the contrast too far. For instance, in the parable of the good Samaritan, do we really wish to call those just walking by “evil”? Indeed, even “bad” might be too strong a word in at least some contexts.

***Or at least the type of women who tend to be found in areas like “gender studies” and feminism. Still, in my personal impression to date, women often see “being an adult” as the equivalent of “having a family”—while a man might be more focused on “carrying responsibility” or “doing the right thing”.

But here we might have the crux: These efforts deal with topics like right and wrong, good and evil, positive and negative behavior and developments, human strengths and weaknesses; often contrasting or putting in conflict egoism and altruism, loyalty towards two different things (say, a brother and country), duty and safety/comfort, whatnot. What they do not do, is ask questions like “Should I wear a skirt to work?”—and why should they? That is a small and mostly irrelevant question, starting with the low probability that a man would want to do so. (The reverse questions around some women can have a greater value, e.g. to move them towards more practical clothing, but are still not truly important.)

True, in the area between these extremes, there are questions that might be worthy of some exploration (and do not obviously fit in the context of an epic fantasy adventure). For instance, we might consider “Is it unmanly to be a stay-at-home dad?”: It could be argued that someone who avoids that role for that reason is lacking in maturity. On the other hand, this constellation is not very common, with more common reasons including a greater drive to accomplish something professionally and a lesser tolerance of children. A typical intelligent and educated man will not fear what the blokes in the pub will say,* but he will have concerns like losing ground in his career**, earning less, being bored by a less intellectual type of work, being driven up the wall after spending the whole day, week in and week out, with his children,*** etc. Here, in contrast, duty can come in, and a man who unexpectedly finds himself a single parent might very well stay at home out of a sense of duty. His friends might give him a minor ribbing, but they would hardly think less of him—they would see a man doing something manly (viz. doing his duty by his children).

*A recurring issue is that “gender theorists” and feminists present a very stereotypical, prejudiced, and often outright incorrect image of men, e.g. through ignoring individual variation and over-focusing on sit-com “proles”—if men are painted as Al Bundy, then we should equally paint women as Peg Bundy. Similarly, if we do not look at the people with some modicum of intelligence, there is no point in discussing the matter: Stupid people will, barring a revolutionary medical break-through, remain stupid, no matter how many treatises are written on their behavior—and if we look at the behavior of stupid women, it is certainly nothing for the female sex to be proud of.

**But is not a career drive also something to analyze/problematize/deconstruct/…? That depends on why the drive is there. Believers in the out-dated “tabula rasa” model of the human mind might jump to the conclusion that a career drive is necessarily something artificial, which explains much of their wish for further investigation (but, obviously, only within their own “everything is a construct” framework). However, there are strong signs that such differences are largely caused by biology, making a further investigation a low priority—if in doubt, because this drive is mostly beneficial. A major reason behind the continual failure of various modern feminist, PC, Leftist, whatnot attempts to create equality of outcome is simply that they push past the point where inborn characteristics become a deciding factor—they fail to realize that differences in outcome are not ipso facto proof of differences in opportunity. (Similar arguments apply to other points above.)

***Note that a love of one’s children is not an obstacle to such irritation.

Written by michaeleriksson

October 19, 2018 at 5:07 am

Appointment with Death: Human memory and a major plot-twist

I have just re-read Agatha Christie’s “Appointment with Death”, following a first reading some ten to fifteen years ago.

(spoiler alert)

While the book was as interesting and well-crafted as I remembered, my enjoyment was marred by the knowledge, from my first reading, that there actually had not been a murder—but a malicious suicide.

I read on, probably not paying as much attention to details as I should have, and saw Poirot, in his traditional final summary/round-up/interrogation, eliminate suspect after suspect, often explaining the inconsistencies in witness statements by a deliberate attempt to protect someone else (in turn based on the incorrect belief that this someone else was the murderer), leaving us with … a murder.

This leads me to two sub-topics:

Firstly, memory is a fickle thing. Incidents like these make me question how much of my own life I might remember in a distorted manner (and how much others might misremember of theirs), especially with an eye on the occasional inclusion in my writings. They also raise serious concerns about e.g. the reliability of witnesses*, and strengthen my opinions concerning topics like statutes of limitations.

*Professionals have raised such concerns for a long time. For that matter, Christie has been known to use unreliable witnesses, including an easily manipulated old lady in this book.

My suspicion is that a suicide was my main hypothesis for most of the original reading, and that this hypothesis, through a larger exposure, remained in my memory, while the actual culprit faded away. Indeed, I have repeatedly had similar experiences in the past (although none so harmful to a re-reading), e.g. in that I had a clear childhood recollection of the brave and capable hero of “Leiningen Versus the Ants” ultimately succumbing to a swarm of his enemies; but found that he actually survived and defeated the ants, when I re-read the story as an adult. Indeed, during the re-reading, my faulty recollection had me contemplating an interpretation of the story as symbolic of the futility of human plans and efforts against something too powerful (e.g. nature, God, or a greater mass of people)—an interpretation that did not pan out… That misrecollection was similar in that most of Leiningen’s efforts throughout the story had been futile: He had repeatedly and temporarily held off the ants with consecutive lines of defense, but each line was ultimately overcome, and the general tendency of failure dominates the story. To boot, the scene with his last desperate run, and the attacks during it, must have been very strong to a child, leaving a correspondingly strong impression.

Secondly, I am left with the feeling that Christie made an error of judgment in what is otherwise the best of her books that I have read.* Not only was this the perfect opportunity for the twist ending of twist endings,** but it would also have fit with both the character of the victim (Mrs Boynton) and the timing of events: Mrs Boynton was portrayed as an extremely malicious and tyrannical woman, who enjoyed keeping her family down. To boot, she was elderly and sickly, with her death not being truly unexpected. To boot, her grasp over the family was cracking, as she had under her thumb a daughter of her own, three step-children, and the wife of one of the step-sons—and the latter had declared her intention to leave (even at the cost of losing her husband), the husband was contemplating following her, another step-son was enticed to rebellion by a new romantic interest (Sarah), and he and the step-daughter had contemplated murder to free the family…***

*At a guesstimate, about a dozen—which is still only a fraction of her overall works.

**In light of the good fit, I make a minor reservation that Christie might have deliberately tried to mislead the reader into thinking suicide, and used the reversal as the twist. With my second reading, I am faced with the problem that I might have failed to see such attempts, already being convinced of the suicide; while my first reading is too long gone. Even should this be the case, however, I consider the suicide version to be better (with corresponding alterations to remove any too open hints at suicide).

***Here the question of what Mrs Boynton knew is important, with this question partially hinging on when she died. For instance, she was aware of Sarah, but likely never understood how great her effect was: The step-son in question went to take a stand and break free on the very day of her murder, but she is, towards the end, revealed to have already been dead when he reached her. However, since he pretended towards the others that she was still alive, another interpretation is possible for most of the book, that he did tell her off and that she took this as an impulse to act. The situation with the other step-son is quite similar.

Consider now a scenario in which she has the knowledge that her death is not far off, she is faced with this collapse of her petty dominion, and she sees a final way to spite the family—commit suicide and make sure that one or several of the “steps” go down for murder. (This might also, depending on the wills and laws involved, have moved more of her late husband’s fortune onto her biological daughter.)

I am uncertain whether this scenario would have fit well enough given the facts presented prior to the summary, but if not, little would have to be changed. The issues around the syringe(s) need not be a problem, assuming e.g. that she had herself stolen one and deliberately left it at the site of the crime (in order to draw attention to the unnatural death); while one of the family members had later removed it, in an attempt to protect another family member (consistent with behavior actually displayed). Contradictory claims of when she was alive and when she must have been dead might be resolvable through an alleged incident with Mrs Boynton’s watch, which had run out and then been rewound and reset by one of the step-sons—possibly, she deliberately let the watch run out in order to somehow trick him into noting the wrong time from some other source.

The actual culprit and the resolution are unsatisfactory, too sudden, and leave the reader in a position where he would be hard-pressed to reach the right conclusion*: Mrs Boynton had once been a wardress in a prison. The murderer, Lady Westholme, had once been an inmate at said prison, Mrs Boynton had recognized her, and was now intending or threatening to use this knowledge to destroy Lady Westholme’s reputation, political career, whatnot.** Poirot’s conclusion hinged on statements made by Mrs Boynton directed at Lady Westholme immediately after being given a speech by Sarah, who was not even aware of Lady Westholme’s presence. (Specifically, statements that she never forgot a face and whatnot.) Re-reading the corresponding passages, I can see Poirot’s point (e.g. direction of gaze, surprising formulation); however, resolving the oddity of the statements in context by assuming that they were directed towards a third party forces the introduction of a greater oddity—she must now have let a severe insult to her self-image go entirely unanswered.*** To boot, the formulation was merely surprising, not implausible, with e.g. “I will never forget you or your insults; and one of these days, I am going to get you” being a reasonable interpretation. (Certainly, the effect on Sarah was considerable, pointing to an odd-but-skillful threat.)

*Poirot’s repeated emphasis of the scene might have been clue enough, but (a) I, specifically, was not paying the attention that a Christie story requires, (b) readers, in general, should not have to rely on meta-information, e.g. what the detective’s suspicions are, in order to reach the right conclusion—the point of a good murder mystery is for the reader to try to find the culprit in competition with the detective, not to be led by him. (I have no recollection of whether I managed to get to the right conclusion during my first reading.)

**How seriously such threats were taken by Lady Westholme is further illustrated by there actually being a suicide in the book: Lady Westholme’s, as she realized that the game was up. Notably, Poirot repeatedly emphasizes that he could not necessarily prove anything, and a conviction was likely far from certain, making a “death before dishonor” scenario likelier than despair over a return to prison.

***Unless we assume that she deliberately directed the same set of statements towards two individuals simultaneously, which, while not entirely impossible, is a bit far-fetched. (Or, just possibly, that she deliberately ignored Sarah as a slight in its own right. If so, however, it failed entirely.)

Generally, to my taste, too many of Christie’s works involve various surprise connections from the past, people living under assumed identities, and similar. In one extreme case, I believe “A Murder is Announced”, there are actually two (!) long-separated siblings independently using assumed identities. That these surprise connections are not necessarily the culprits actually makes matters worse: With Lady Westholme, the surprise connection was the reason for the murder—in other cases, we have both a murder and an unrelated surprise connection. (Not to mention the additional coincidence that these murders take place in connection with Poirot or Miss Marple far more often than could be statistically expected. More generally, if a crime-fighter goes on vacation, it appears a fictional necessity that a crime takes place under his nose…)

A related criticism is how often the murderer is someone originally not among the obvious suspects: If the murderer is someone unexpected in a single story, this is not a problem—it might even be good. However, when it happens in story after story, the effect will be ruined by readers who learn to expect the unexpected. Do X, Y, and Z inherit a fortune after the murder? Then X, Y, and Z are likely innocent, so let us focus on A, B, and C instead.

(These criticisms notwithstanding, I consider Christie brilliant.)

Excursion on incongruities:
When looking at fictional detectives, small incongruities are often very important, in that when nine out of ten facts fit a hypothesis, the hypothesis will turn out to be wrong. Sherlock Holmes might have gained less relative to the police from his deductive abilities than from his search for small details and his insistence that all the details be explained by a single hypothesis. This matches my experiences from other areas well, e.g. in that a minor deviation in a database is often a sign of faulty code—and possibly code that will at some point cause a major deviation. Scientific theories are a great source of examples—if the theory does not explain all that it is supposed to explain, and have all its predictions come true, something needs to be fixed. (My experience with real-life crime-fighting is extremely limited, but almost the same must hold there, except insofar as coincidences need to be taken into account—that cigar ash might have been left by the burglar, but there is also some chance that the butler had taken liberties and failed to clean up the evidence in the excitement after the burglary was discovered.)

Written by michaeleriksson

October 16, 2018 at 12:24 am