Michael Eriksson's Blog

A Swede in Germany

Archive for November 2018

WordPress and mangling of quotes

with 4 comments

Preamble: Note that the very complications discussed below make them quite hard to discuss, because I cannot use the characters that I discuss and expect them to appear correctly. Please make allowances. For those with more technical knowledge: The entity references are used for what, in decimal Unicode terms, are code points 8220 / 8221 (double quotes) and 8216 / 8217 (single quotes). The literal ones correspond to ASCII/Unicode 34, which WordPress converted to the asymmetric 8220 and 8221. (I stay with the plain decimal numbers here, lest I accidentally trigger some other conversion.)

I just noticed that WordPress had engaged in another inexcusable modification of a text that I had posted as HTML by email—where a truly verbatim use of my text must be assumed.* Firstly, “fancy”** or typographic quotation marks submitted by me as “entity references”*** have been converted to literal UTF-8, which is not only unnecessary but also increases the risk of errors when the page or a portion of its contents is put in a different context.**** Secondly, non-fancy quotation marks that I had deliberately entered as literal UTF-8 have been both converted into entity references and distorted by a “fanciness” that went contrary to any reasonable interpretation of my intentions. Absolutely and utterly idiotic—and entirely unexpected!

*Excepting the special syntax used to include e.g. WordPress tags, and the changes that might be absolutely necessary to make the contents fit syntactically within the displayed page (e.g. to not have two head-blocks in the same page).

**I.e. the ones that look a little different as a “start” and as an “end” sign. The preceding sentence should, with reservations for mangling, contain two such start and two such end signs in the double variation. This to be contrasted with the symmetrical ones that can be entered by a single key on a standard keyboard.

***A particular type of HTML/XML/whatnot code that identifies the character to display without actually using it.

****Indeed, the reason why I use entity references instead of UTF-8 is partially the risk of distortion along the road as an email (including during processing/publication through WordPress) and partially problems with Firefox (see excursion)—one of the most popular browsers on the web.
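To make the distinction concrete, here is a minimal Python sketch (my own illustration; neither WordPress nor my markup setup runs anything of the kind) that lists the four fancy quotation marks by decimal code point, together with the corresponding entity references and the plain ASCII mark:

    import html

    # The four "fancy" quotation marks, by decimal Unicode code point:
    fancy = {8220: "left double", 8221: "right double",
             8216: "left single", 8217: "right single"}

    for codepoint, name in fancy.items():
        literal = chr(codepoint)          # the literal UTF-8 character
        entity = f"&#{codepoint};"        # the entity reference for the same character
        # Both render identically in a browser, but only the entity reference
        # survives transport through contexts that are not safely UTF-8.
        assert html.unescape(entity) == literal
        print(f"{name:12} U+{codepoint:04X}  entity {entity}  literal {literal}")

    # The symmetric ASCII quotation mark (code point 34) that WordPress
    # converted into the asymmetric 8220/8221 pair:
    print(chr(34))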

The latter conversion is particularly problematic, because it makes it hard to write texts that discuss e.g. program code, HTML markup, and similar, because there the fancy quotes are simply not equivalent. Indeed, this was specifically in a text ([1]) where I needed to use three types of quotation marks to discuss search syntax in a reasonable manner—and by this introduction of fanciness, the text becomes contradictory. Of course, cf. preamble, the current text is another example.

This is all the more annoying, as I have a markup setup that automatically generates the right fancy quotes whenever I need them—I have no possible benefit from this distortion that could even remotely compete with the disadvantage. Neither would I assume that anyone else has: If someone deliberately chooses to use HTML, and not e.g. the WYSIWYG editor, sufficient expertise must be assumed, especially as the introduction of fancy quotes is easy within HTML itself—as demonstrated by the fact that I already had fancy quotes in the text, entered correctly.

Excursion on Firefox and encoding:
Note that Firefox insists on treating all* local text as (using the misleading terminology of Firefox) “Western” instead of “Unicode”, despite any local settings, despite the activation of “autodetect”, despite whatever encoding has actually been used for the file, and despite UTF-8 having been the only reasonable default assumption (possibly, excepting ASCII) for years. Notably, if I load a text in Firefox, manually set the encoding to “Unicode”, and then re-load the page, then the encoding resets to “Western”… Correspondingly, if I want to use Firefox for continual inspection of what I intend to publish, I cannot reasonably work with pure UTF-8.

*If I recall an old experiment correctly, there is one exception in that Firefox does respect an encoding declared in the HTML header. However, this is not a good work-around for use with WordPress and similar tools, because that header might be ignored at WordPress’ end. Further, this does not help when e.g. a plain-text file (e.g. of an e-book) is concerned. Further, it is conceptually disputable whether an HTML page should be allowed to contain such information, or whether it would be better left to the HTTP(S) protocol.
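For those curious about what such a mis-interpretation does in practice, the following Python sketch (again only my own illustration, approximating “Western” with Latin-1; it says nothing about Firefox internals) shows the typical mangling:

    text = "\u201cfancy quotes\u201d"      # literal fancy double quotes
    utf8_bytes = text.encode("utf-8")      # how a UTF-8 file actually stores them

    correct = utf8_bytes.decode("utf-8")   # what a "Unicode" interpretation yields
    mangled = utf8_bytes.decode("latin-1") # what a "Western"-style interpretation yields

    print(correct)   # the original text, quotes intact
    print(mangled)   # each quote has turned into three junk characters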


Written by michaeleriksson

November 29, 2018 at 8:27 pm

Multiple ideas vs. focused texts

with one comment

I have repeatedly encountered claims by authors* that their stories only truly work, come to life, whatnot, when they are based on two or more separate ideas. I have made the same, but more ambivalent, observation regarding my own (non-fiction) texts: The texts that I really have a drive to write, that are the most fun to write, that develop my own thoughts the most, …, tend to be the ones combining two or more ideas. The way of writing described in an older text also almost forces the inclusion of multiple ideas, even when the original drive was not rooted in more than one idea. On the downside, these texts are also the least focused, might give the readers less value for their time, and would be most likely to be torn to pieces if submitted to e.g. a college course on essay writing.**

*Including Stephen R. Donaldson with regard to his “Gap” and “The Chronicles of Thomas Covenant” series. As for the others, I have to pass, seeing that I only took superficial note at the time, and that these encounters stretch back over decades.

**However, I state for the record that I simply do not agree with much of the standard writing advice and that my deviations are more often rooted in disagreement with than ignorance of such advice. This includes, outside of very formal types of writing, keeping a tight focus on a single topic, or focusing on a single side of an issue. (The latter might be more convincing, but should a good essay aim at convincing or at giving insight and attempting to get at the truth?) It also includes “avoid the passive”, which I consider one of the worst pieces of advice a writer can get.

Consider, for a comparatively simple example, a few thoughts around the 2018 Chess World Championships and the fact that all the regular games ended in draws:*

*I originally intended to write a separate text on that matter. Since I sketch the material here, and have an ever-growing back-log, I will forego that text.

On the one hand, I have long observed that, as a sport matures and as we look at higher levels within a sport, (a) players/teams tend to grow stronger defensively at a higher rate than they do offensively and/or differences in scores tend to grow smaller, and (b) the nature of the sport often changes.*

*Consider for (a) the chance of finding, in soccer, a five-goal victory margin or an overall of ten goals in a single game between national teams today vs. fifty years ago, between men’s teams vs. between women’s teams, or between Bundesliga teams vs. between local amateur teams. (With great reservations for temporary fluctuations, the presence of a truly exceptional team, unusually large mismatches in level, and similar.) Consider for (b) how a low level bowler aims to knock down as many pins as possible, hoping for a strike every now and then, while the pro is set on missing a strike as rarely as possible.

On the other hand, partially overlapping, it seems to me that chess is being crippled by too many draws. Notably, at increasingly higher levels of play, it becomes rarer and rarer for Black to win, implying that the “job” of Black is to get a draw. In contrast, the “job” of White is to win. However, even White’s win percentage is often so low that draw orgies result. Looking specifically at the 2018 World Championships, we also have another negative: The defending champion and eventual winner, Carlsen, was known to be stronger at speed chess than his opponent, Caruana—and in case of a tie after twelve regular games, the match would be determined by a speed-chess tie-breaker. Considering this, Carlsen had a strong incentive to play risk-free games, knowing that a tie was close to a victory and that draws in the individual games favored him. Going by the account on Wikipedia, this almost certainly affected the 12th game.* After twelve consecutive draws, the tie-break came, and Carlsen now won three games out of three… Similarly, even without a favorable tie-breaker, a player who got an early win might then just go for safety draws in order to keep that one-point advantage until the end.

*Going by the other game accounts, there were no strong signs of such caution. (I have not attempted to analyze the games myself, and I might even lack the ability to judge what is “safety first” play on this level.) However, another player in his shoes might well have played with an eye mainly at forcing a tie-break, and have viewed a victory within the regular portion as a mere bonus.

Looking more in detail, my plans did not advance so far that I can say with certainty what I would have written, except that it would have included a potential rule change to e.g. give Black more points than White in case of a victory and/or to make the split of points uneven (in favor of Black) in case of a draw. This would have had the intention of giving Black incentives to play harder for a win and/or to make White dissatisfied with a draw, and some discussion of how such rule changes could turn out to be counter-productive would have followed. For the latter, an intended parallel example was the off-side rule in soccer: Abolishing it would lead to more goals and, presumably, more exciting* play if the teams play as they do today, and it could give incentives to play more aggressively through putting forwards higher up the field to await a pass; however, it could also lead to more defensive play through keeping defenders back even when attacking, in case the ball is lost and a quick counter-attack follows.

*For some value of exciting: I usually find watching soccer to be quite boring.

Here we also have an illustration of one of the problems with more focused texts: If I were to try to divide the above into two (or more) texts, they would each be missing something or be forced to partially duplicate discussions. It could be done. There might even be contexts when it should be done. However, this would entail more work than writing a single text, the result would be lesser in my eyes, and I would, myself, prefer to read the joint text.

The illustration would have been better, had I been further along in my planning. However, consider e.g. how a discussion of the off-side rule in the chess text would have been weakened without a discussion of the more general phenomenon and the context of the comparatively low number of goals in soccer (if in doubt, when compared to e.g. basket-ball or ice-hockey). Goals in soccer, in turn, would be a fairly uninteresting and loose special case without having an eye on the wider issue of (a) above. Or consider just discussing the “drawiness” of top-level chess without mention of (b) in general. Etc. For a good example of a text actually written, see [1]: Here a discussion of a specific PED-related controversy is combined with a general discussion of some sub-topics, and then transitions into a questioning of how reasonable the current take on PEDs is. (Could have been two or even three texts, had “focus” been a priority, but, to my taste, the actual text works better.)

Excursion on fiction and multiple ideas:
The above-mentioned claims by authors are likely mostly relating to fairly abstract ideas or broad themes that do not automatically point the way;* however, in my experiences as a consumer of fiction, the better works often have a number of ideas or concepts of a more concrete kind that combine to make them that much greater. For instance, “Star Wars” without light-sabers would still be “Star Wars”, but it would not have been quite the same. Similarly, “Star Wars” without the Force would still have worked, but … Or consider “Star Trek” and the individual series with and without holodecks. Clearly, the holodeck is not needed, but it adds so many great additional possibilities. It would certainly be possible to make a reasonably good “high concept” series around the holodeck alone. Similarly, “Chuck” basically combines two different TV series—comic spy adventures and the absurdities of a fictional electronics store. Taking just the latter and combining it with the family-life of Chuck would have made for a reasonable comedy series. Taking just the former in combination with family-life would have made for a very good action-comedy series. Having both in one series made it truly great entertainment.

*Donaldson speaks e.g. of a combination of leprosy and unbelief for “The Chronicles of Thomas Covenant”—the road from those ideas to the actual books is quite long and very different books would have been equally possible.

And, no, this is not limited to modern screen entertainment: Shakespeare’s “Romeo and Juliet”, e.g., is not just a tragedy or love story—but both. It also has more comedy in it than most modern rom-com movies… Then there are life lessons to be drawn, some remarkable poetry, and whatnot. At least the film adaptations by Zeffirelli (outstanding!) and Luhrmann show that there is room for action scenes.*

*I am uncertain how this could have come across in an Elizabethan theater: On the one hand, the means would have been more limited; on the other, the standard of comparison was necessarily much lower. (Both movies also make remarkable use of music; however, that is independent of the underlying play.)

Written by michaeleriksson

November 29, 2018 at 6:59 pm

Good riddance, CEBIT!

leave a comment »

It appears that the once world-leading German computer fair CEBIT has been canceled: Good riddance! Let other trade fairs follow suit!

While I do not rule out that there are some special cases of fairs that make sense or some minor purposes of a fair that cannot be better solved through other means, fairs are largely pointless—mostly just diverting money to the fair industry and to the local city and its tourist businesses. For others, including regular tourists and “legitimate” business travelers, the effects are mostly negative. This especially through the great troubles of finding hotel rooms during fairs, and the often quite considerable price hikes* that take place on the rooms that can be found. (Note the similarity to the advertising industry, both in purpose and in that it brings more benefit to itself than to its customers—and is usually outright negative for everyone else.)

*During the largest fairs, I have seen prices more than doubled on some occasions.

Going as a consumer* is, judging by my own experiences, fairly pointless: The things that might be interesting to see are what everyone else wants to see, implying that there are queues and crowds. The actual information presented is typically minimal and/or extremely commercial. Information about e.g. products and services is much easier to find on the Internet or through qualified publications in today’s** world. The main benefit might well be the opportunity to get some freebies, e.g. a few magazines—but compared to the ticket price this will rarely be worth the trouble. (And most visitors will also have to factor in travel and hotel costs, etc.) Indeed, I have twice received complimentary tickets to specifically the CEBIT and still chosen not to go, considering the other costs involved and the time wasted too great for it to be worth the effort…

*If allowed, as with CEBIT: Some fairs are “business only”, which I consider far more sensible, both through creating a greater focus and through reducing the damage to third parties.

**Note that the situation here and elsewhere might have been very different just a few decades ago.

The situation is very similar for those businesses that are there as passive visitors. They might in addition have the option to “check out the competition”, but since they will only see what the competition wants seen, the value is low. There are some networking opportunities, but these face the same popularity issues—especially, as these visitors are likely to be less important players, who bring comparatively little value to the popular targets… Such networking would be better handled by visiting a few conferences, where the participants are better filtered and more time for such purposes is available. Alternatively, a contact service* that matches up businesses with sufficient compatibility in mutual value is likely to create greater benefit.

*I am, admittedly, uncertain to what degree such exist and do a good job; however, I have seen the idea broached repeatedly over the years. If in doubt, creating such businesses and foregoing fairs would be an improvement.

For active participants (i.e. those who have own stalls and whatnots), the situation is a bit better, but mostly a fair amounts to a publicity opportunity or a “to see and be seen”*. Here we again have the popularity problem—the likes of Apple will garner great interest, while almost no-one will pay attention to an obscure ten-man company. At the same time, Apple does not need to go to trade fairs to get publicity… For that matter, running a product demonstration or a speech over the Internet is not hard, while e.g. putting up a sales brochure is utterly trivial.

*Likely with heavy emphasis on the second part. Indeed, my employer during the dot-com crash deliberately went to computer fairs, including the CEBIT, for the purpose of showing that the company still existed…

At the end of the day, the press and the executives might like fairs, but the benefits compared to the alternatives remain dubious for everyone except the fair organizers, the hotels, etc. For most others, the fairs are an outright negative. For instance, I could have saved many hundreds of Euros and at least several hours of accumulated hotel searches had it not been for the flooding of the Cologne* hotel-market that takes place again and again. Or consider the additional pressure on the (already strained) transport connections to and from Cologne. Or consider that a very large and central piece of real estate is occupied by the fair area, where there could have been apartment houses for hundreds, likely even thousands, of people, easing the pressure on the over-heated apartment market.

*Cologne is one of the leading fair cities in Germany, and I have spent a part of my freelance career working there. (But the CEBIT, to avoid misunderstandings, took place in Hanover.)

Written by michaeleriksson

November 29, 2018 at 12:38 am

A potential revamping of college tuition

leave a comment »

With regard to college/university there is a subset of problems that could be partially resolved in a simple manner:

  1. In order to ensure a high degree of equality of opportunity and social mobility, it must be possible even for people with low income and little wealth (be it own or when looking at the parents) to gain degrees. (Assuming that they are intelligent and hard-working enough to succeed with the academic side of the equation—few misconceptions are more dangerous than the belief that college creates diamonds out of charcoal.)
  2. Colleges cost money to run, and it is not optimal to finance them through public funds. Not only is the use of “someone else’s” money a bad idea in general, but here those that do not go to college are disadvantaged in an unfair manner.

    Note that this affects the U.S. too, because of the considerable “financial aid” given. Notably, the financial aid is also a driving force behind tuition increases—when the economically weaker buyers of a uniform product are given more money, the sellers have strong incentives to raise prices. The price increase then hits everyone, while only the weaker were given aid, which increases the group that would benefit from aid. To boot, the original aid receivers do not benefit as much as intended, creating a wish for more aid per person. Here there is a risk of a vicious circle.

  3. Academically poor students tend to cost a lot more money than the better students, e.g. in that they require more support outside of lectures and that they are the main reason why the highly inefficient lecture system is still “needed”.
  4. There is a severe over-inflow of students not suitable for college, who force further dumbing down, weaken graduation criteria, etc.
  5. In tuition-heavy countries, colleges have an artificial incentive to let students graduate, pass, get good grades, or even be admitted, irrespective of whether they have actually earned it.
  6. Excessive income, as e.g. with some U.S. colleges, leads to waste, including an ever-growing administration.

    (As an overlapping special case, it could be argued that the U.S. campus system is an evil per se, and that the students would be better off paying directly for own and independent housing, as they do in e.g. Sweden and Germany, rather than to pay the colleges to provide housing. Certainly, my impression of the living environment, from U.S. fiction and general reputation, points to it being positively harmful to someone who actually wants to study, which would make it a doubly poor use of money.)

  7. If only partially relevant: Popular programs* often have to reject even qualified students.

    *I use “program” to mean something at least somewhat structured, with an at least somewhat separate admission, and similar. Due to the wide variety of systems in use, this word need not be suitable everywhere. Note that the word “major” would implicitly exclude e.g. master programs and med school, which makes it highly unsuitable, even other potential concerns aside.

Consider the following solution sketch*:

*It is highly unlikely that this sketch would be viable without modifications and there are details to clarify. Complications include what exact numbers to use, whether borders should be sharp or fuzzy, what criteria should determine who belongs where, whether percentages or absolute numbers are better, how many categories are reasonable, what conditions are best suited for what category, …

Colleges are by law forced to let the top 10 percent of students study for free, with costs covered by the colleges’ funds.* Students from 10 (exclusive) to 30 (inclusive) percent are charged approximately at cost**. Students from 30 (exclusive) to 60 (inclusive) percent are charged at cost + some reasonable*** markup. The remaining students can be charged whatever the college wants. There is no additional financial aid.

*It is of fundamental importance that the colleges’ money be used. If, e.g., government money was given to the colleges to cover the costs, the system would fail.

**Based on a reasonable estimate of how much each student costs with regard to what directly relates to the education, e.g. salaries to professors for the courses taken, but not e.g. the cost of running the administration or various sports programs.

***Possibly, 500 or 1000 EUR/semester (resp. the purchasing-power adjusted equivalent in local currency), or some percentage of the costs (on top of the costs themselves).
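As a toy illustration of the bracket structure (a sketch only: the numbers for cost, markup, and the unregulated price are pure placeholders, and cf. the footnotes for the many open details), consider:

    def tuition(percentile, cost=2000.0, markup=750.0, market_price=10000.0):
        """Fee per semester (EUR) for a student at the given class percentile (0 = best)."""
        if percentile <= 10:       # top 10 percent: free, covered by the college's funds
            return 0.0
        if percentile <= 30:       # 10 (exclusive) to 30 (inclusive): at cost
            return cost
        if percentile <= 60:       # 30 (exclusive) to 60 (inclusive): cost plus markup
            return cost + markup
        return market_price        # the rest: whatever the college wants to charge

    for p in (5, 20, 45, 80):
        print(f"percentile {p:3}: {tuition(p):8.2f} EUR/semester")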

In such a set-up, worthy students will rarely have financial problems; colleges can still earn plenty of money (but with fewer issues of insane surpluses); a very wide admittance would be possible, but the academically less* fit would tend to disappear when they discover that they fail to score well enough to study cheaply, which increases the quality of the graduates; etc. Note especially that while colleges might still have incentives for over-admission and “over-passing”, the students so favored would still need to pay their fees, and these incentives will then be largely countered by incentives for said students to drop out**. To boot, the colleges only have incentives to keep the students on—not to give them better grades than they deserve or to let them graduate before they have reached a certain standard.

*Note that these need not be unfit when it comes to a competitive program. In such cases, the effect is not so much a removal of the unworthy as it is a filtering based on result, where today a filtering based on expectation of result takes place. For instance, instead of admitting those with a GPA of 4.0 and passing over the 3.9s, a program could admit the latter too, and then let the students filter themselves out based on actual performance over the first few semesters. (But there might still be some programs where this type of increase is not plausible.)

**From the given program at the given college. It is quite possible that studies are continued with more success in a different program and/or at a different college.

In countries where various forms of public funding pay for significant portions of the cost and tuition is kept very low, this scheme would allow the introduction of higher fees (without negative effects on worthy students) and a corresponding reduction of the cost to the public: Instead of effectively shelling out money to everyone who wants to study, the money is limited to the worthy—or even to no-one, because the worthy are already covered by the fees paid by the unworthy.

Also note that the restriction on costs includable in the two mid-categories gives incentives to keep administration and other overhead down. For instance, if a professor is given a raise, ninety percent of students can be charged extra—but for an administrator, it is only the bottom forty. Ditto if the number of professors or administrators, respectively, per student is increased.

Excursion on actual costs:
Keep in mind that the actual cost of a student is much, much lower than what some U.S. fees could make one believe—this especially when we look at a “marginal”* student or a student bright enough to learn from books (instead of lectures) and to solve problems through own thinking (instead of being led by the hand by TAs). As I have observed, it would sometimes be cheaper for a handful of students to pool their money to hire a dedicated, full-time professor than to go to a U.S. college.

*I.e. an additional student added to an existing class, who will typically add far less to the overall cost than could be assumed by calculating the average cost per student.

To exactly quantify costs is hard to impossible, when looking at e.g. differences in class sizes, salaries of professors, the type of equipment needed or not needed in different courses, what type of work* the students have to present, etc. However, for a good student taking non-wasteful courses, the marginal cost might be a few hundred Euro per semester, and a few thousand should be plenty in almost any setting and even on average.

*Compare e.g. a math course with one or two tests to a writing course with a handful of essays, all of which should be given insightful feedback. (Whether they are given such feedback, I leave unstated.)

Excursion on percentages:
When percentages are used, we can have situations like someone dropping out of the top 10 percent because others dropped out entirely.* Originally, I saw this as negative; however, on further thought, it might work out quite well, seeing that the limit will grow tougher in the more advanced years, stimulating competitiveness and keeping the level of those who graduate even higher. However, some type of fail-safe might be beneficial, e.g. that the percentages are converted to absolute numbers at the beginning of each semester. (If there were a hundred students to begin with, the ten best students are guaranteed top-level status, even if the class has shrunk to ninety at the end of the semester.)

*E.g. because he was the tenth best student in a class of one hundred, and is now the tenth best in a class of ninety.

Excursion on choice of college, program, whatnot:
A potentially positive side-effect is that strong students have new incentives to consider less popular colleges and programs. For instance, someone who could be accepted to Harvard, but with a considerable risk of having to pay, might prefer a college where he is almost guaranteed to be a top-10-percenter. Such decisions might in turn have effects like creating a downward pressure on tuition fees of expensive colleges, spreading talent more uniformly, reducing the “networking effect”* of top colleges, etc.

*According to some, the main benefit of going to e.g. Harvard is not the level of the education, but rather the career-boosting contacts made there. Also note that networking is often just a form of corruption—something that damages an employer and/or society for the benefit of the networker. Such damage can e.g. occur when someone is hired because of “whom he knows” rather than “what he knows”.

Excursion on the freedom of the colleges:
One negative effect is that it limits the freedom of colleges regarding pricing, which could have negative market implications and/or be ethically dubious. This complication should be seriously considered before an implementation is attempted.

A reconciliation might be to only put some categories of colleges under the suggested system, including all that are state owned/run, all that have received non-trivial public support within some number of years prior to the “now”, and all that have directly or indirectly benefited from financial aid to their students in the same time frame.

However, if push truly comes to shove, this is one area where even such a strong regulation would be acceptable to me—in light of the catastrophic decline of higher education over the last few decades and the great threat that an even further decline poses.

Excursion on living costs:
In a non-campus system, topics like rent might need additional attention. It might e.g. be necessary to allow some amount of financial aid, preferably in the form of loans, to cover such costs. However, importantly, this would be something between the government and the student—with the college having nothing to gain. Further, it is not a given that such aid would be necessary on a larger scale, especially as societies grow more affluent: For very many, living with the parents, monetary help from the parents, working summers, private loans based on expected future income, or similar, can provide a solution that does not use tax-payer’s money and does not have a major impact on success in college.

Remark concerning “Thoughts around social class”*:
This text is not strictly a part of that text series, but there is some overlap and the implied division of students into more and less worthy categories is highly compatible with an intended future installment.

*See e.g. the last installment published at the time of writing.

Written by michaeleriksson

November 26, 2018 at 7:29 am

Revisiting verbosity based on Wesnoth

with one comment

Since writing a text dealing with verbosity (among other things), I have dabbled with Wesnoth*, which well illustrates the problems with undue verbosity, lack of tempo, and similar:

*See an excursion for some information and a few notes on terminology beneficial for the non-player’s understanding of this text.

  1. Most campaigns contain an undue amount of narration and dialogue*.

    *Which is fixed in advance. Only very rarely can the player influence the development of the dialogue, and then only within a small set of fixed choices.

    Now, a good story can make a campaign more enjoyable*; however, the point of the game is to play the game—if I want to read an extensive story, I can just grab a book.

    *Especially, through adding aspects with no correspondence in the “pure” game, e.g. character background or a romantic sub-theme.

    Worse: Most of the resulting text is pointless. It adds no value to the story or the overall enjoyment; is repetitive; states what should be a given; or is otherwise a waste of time. (That the text is very often poor by criteria relating to prose, effectiveness, story-telling, …, does not help—but that is an unrelated topic.)

    For instance, very many scenarios start with multiple enemy leaders saying variations of “I will crush you, puny humans!” or “Victory shall be ours!”—which reminds me of German sports writers, who do not tire of headings like “X will den Sieg!” (“X wants to win!”). What had they expected that made that news-worthy?!?

    Another complete idiocy is “war council” scenarios where various characters make mostly pointless statements, sometimes leading up to half-a-dozen characters, one after the other, saying “Agreed!” (or something to that effect)—where a simple “(All agree.)” would have done just as well, with a fraction of the player’s time wasted. Usually, the entire council could have been compressed into just a few lines of dialogue or replaced by a simple narrative message.

    The bulk, however, is lost on unduly long narration, mostly amounting to filler.

    To boot, if a campaign is played more than once, the value of the (textual parts of the) story is diminished further (while the non-story parts remain similarly interesting to the first time). What might be acceptable the first time around, need not be so the second, third, or fourth time.

    Sometimes, it is so bad that I skip entire sequences of story (which is, fortunately, possible as a lesser evil)—but am then (a) left with no benefit at all from the story, (b) often lack context,* and (c) can miss various hints to optimal game play given in the text**.

    *E.g. in that I do not know why I suddenly have an ally or why I am suddenly trying to defeat a band of orcs, instead of those undead that had hitherto been the main enemy.

    **E.g. that a wooded area contains hidden enemies or that some aspect of the standard game-mechanisms has been temporarily altered.

    Most campaigns would be better by cutting the text in half; some would be better by cutting it to a tenth. (Note that I do not say that the story should be cut—only the text.) Generally, it is important to understand that different types of work require different types of writing—a game is not a novel, a play, or even a comic.

  2. The previous item is made the worse by limitations in the way that the game displays text: A longer piece of narration is displayed with no more than a few lines at a time (the next few lines following after user confirmation) and in an annoying manner, where each line is slowly blended in, one after the other. (Fortunately, this blend-in can be skipped by pressing the space key; however, this risks skipping too far, and a setting to skip the blend-in as a matter of course is not present.) Similarly, dialogue, even single words, is always displayed individually for each character speaking. Both imply that the user (even when wanting to read) has to hit the space key every few seconds; both have negative effects on strategies like getting a cup of coffee between scenarios to read the narration and dialogue at the beginning of the next scenario in a fully relaxed state.

    A particular annoyance with dialogue is that any utterance causes the view of the “board” to be focused on the speaking character, which leads to an additional delay and implies that the focus will usually end up at a different portion of the board than before the dialogue.*

    *Since the original focus is not restored. This is OK for pre-scenario dialogue, but problematic with in-game dialogue: Consider making a move to attack, having that attack interrupted by a triggered dialogue, and then having to scroll back to attempt the attack again… This leads to yet another unnecessary delay.

  3. The problems are not limited to text. For instance, some war-council scenarios contain sequences of half-a-dozen characters moving across the board, saying something, and then moving back across the board. These movements bring no value, appear to be unskippable, and take an excruciating* amount of time, during which the player can do nothing within the game. Still, some campaign makers have deliberately taken the effort to add these “value subtracted” moves…

    *I play with the animation speed increased by a factor of four (and have all unnecessary animations turned off). Even so, such sequences are horribly slow. With default settings, the best bet would be to grab a book until the movements are over—which really shows how redundant they are. (Another interface quirk is that the next faster setting is a factor of eight, which would be beneficial here, but might make other portions of the game move too fast.)

  4. A related scenario-error within regular game play is to involve too many units at the same time. For instance, there are some battle scenarios (e.g. in “Legend of the Invincibles”) with more than a hundred AI-controlled units on the board at the same time (almost all of which are moved every single round)—and where it takes several rounds for the player and the AI-controlled enemy to even make contact.* The ensuing (mostly) unimportant movements can go on for minutes… Even after contact is established, it takes quite a while before the majority of the units are actually involved in fighting—and that often occurs because sufficiently many units have finally been killed off…

    *A better way to handle such large battles is to give the opponents less “starting gold” and more “income” or otherwise delay the “recruitment” (without reducing the total number of units eventually involved). A partial improvement is to reduce distances between opponents, but this could lead to too fast a defeat of some of the enemies or increase the influence of luck.

    In such cases, I have even made my own moves, done something completely different while waiting for the computer to make its moves, and then just checked whether the outcome was sufficiently satisfactory* when it was my turn again. Of course, this work-around is often foiled by some random dialogue in the middle of the battle, e.g. when an important enemy unit dies. I then have to click through the dialogue, restart the battle, and go back to my “something completely different” for another few minutes…

    *With an eye on two things: Firstly, the loss of some specific units can lose the game outright. Secondly, if too large losses of other units occur, an eventual victory would be Pyrrhic. In both cases, it is time to start the scenario over with a better approach.

In the defense of these campaigns, they are contributed by various users and, therefore, rarely written by professionals. Then again, the more “professional” a campaign appears in other regards, the more text there tends to be (both in general and with regard to “pointless” text).

Excursion on Wesnoth, background information, and terminology:
The game is officially called “Battle for Wesnoth”. It is a turn-based strategy game, mostly played against an AI, which I played very often some years back—before frustration with too great an influence of luck, a poor user interface, and many idiocies in campaigns eventually drove me away. (The issues discussed here relate to literal or metaphorical verbosity—the overall list would be much longer.)

A “campaign” is a series of linked scenarios, roughly equivalent to the overall adventure or war. A “scenario”, in turn, is roughly a sub-adventure or a single battle. A “unit” corresponds to a piece in chess. I have otherwise tried to be low on “technical terms”, in favor of what those inexperienced with computer games and/or Wesnoth might find understandable.

Note that some descriptions above have been simplified compared to actual play. (For instance, even the large battle scenarios discussed above will typically start with only a handful of units, and see armies rapidly expand through “recruitment”.)

Those interested can download it for free from the official Wesnoth website, which also provides more detailed knowledge than given here.

Disclaimer: I played using the latest version available in the standard Debian repositories (1.12), which is not the latest version released. However, this should only affect general game-features, not individual campaigns. Further, the user interface has never improved* much in the past, leaving me pessimistic concerning later versions.

*Add more unnecessary or even annoying animations—yes. Tweak the looks of various units—yes. Improve actual usability—no.

Excursion on reading speed:
I suspect that some of the above is worse for those who read or process information faster, e.g. in that the “coffee strategy” will work better for a slower reader, who will hit the space key less frequently and have more time to relax during an individual portion of text. (On the other hand, a slower reader will, obviously, need longer to reach game play, and might grow more frustrated with the length of the delay.)

Excursion on “The Elements of Style”:
“Omit needless words” is likely the most famous claim in that book. Examples like Wesnoth and Der Untergang des Abendlandes really drive home the point. Notably, the main problem with both is the sheer quantity of needless words (and needless movements, etc.). The latter also shines a different light on this recommendation, in as far as “The Elements of Style” was written in a different era, when texts like Spengler’s were far more common than today, and the advice correspondingly more beneficial: Looking at typical writing back then, it was likely the single most important advice to give; today, it is “merely” good advice. For instance, my recent criticism of Stephen King’s novels (as too thick for their own good) is not rooted in individual formulations being unduly long*, but in problems on a higher level, e.g. individual scenes that could be cut out or shortened without loss.

*His sentences are reasonably compact—certainly, more so than my own…

Written by michaeleriksson

November 18, 2018 at 11:43 pm

Conflicting own beliefs and what to do about them

with one comment

In the set of beliefs* held by anyone, there will be occasional real or imagined conflicts (consider also e.g. the concepts of “cognitive dissonance” and “doublethink”). People differ mainly in (a) the degree to which they are aware of these conflicts and (b) how they handle them. Unfortunately, most people are unaware of most or all conflicts that arise, make no attempts at detecting them, and are prone to just explain away the conflicts that are known—even descending to outright doublethink.** A particular issue with awareness is that a too faulty or incomplete understanding can make such conflicts go undetected.***

*I use “belief” as a catch-all that, depending on context, could include any or almost any belief, idea, opinion, whatnot that implies or would imply something about something else. This includes e.g. “cucumbers are green”, “cucumbers are blue”, “God does [not] exist”, and “I [do not] like chocolate”.

**This includes such absurdities as simultaneously professing to believe in Evolution and Gender-Feminism. Indeed, a great deal of my annoyance with politics/ideology (in general) and Feminism/Leftism/PC-ism (in particular) results from the adherents’ ever-recurring faults in similar directions.

***Consider again Evolution vs. Gender-Feminism: It is, for instance, highly unlikely that evolutionary processes would generate physical differences while keeping mental abilities identical—but exactly that is seen as a given by most Gender-Feminists (and a significant portion of the PC crowd, in general). Similarly, it is highly unlikely that the different roles of men and women in most societies over thousands of generations would have left no trace in form of e.g. natural inclinations. A Creationist–Feminist match-up would be less prone to contradictions.

In many cases, these conflicts are sufficiently trivial that they may be neglected.* For instance, that someone has two favorite dishes, music bands, movie stars, …, rarely has major impact on important decisions.** When it comes to topics that can have a greater impact, especially on others, care should be taken, however. Consider e.g. questions like how to vote in an election, what recommendations to make to others, what agendas to push, …—here it is important to have a sufficiently sound view of the topic; and if beliefs conflict, the view is unlikely to be sufficiently sound.

*A resolution can still bring benefit, e.g. through better self-knowledge, and I would not advise against the attempt.

**However, the resolution is often fairly simple, e.g. that neither of the two is the favorite and that the word “favorite” is best avoided; or that an opinion has changed over time, while still being professed out of habit.

Giving blanket rules for detection is tricky, but actually reading up* on a topic, gaining an own understanding (as opposed to parroting someone else’s), and deliberately trying to see the “bigger picture” and making comparisons between different fields and ideas, can all be helpful. Above all, perhaps, it is helpful to actually think through consequences and predictions that can be made based on various beliefs, and to look at how they stack up against both each other and against observations of reality. In my personal experience, writing about a topic can be an immense help (and this is one of the reasons why I write): Writing tends to lead to a deeper thought, a greater chance of recollection in other contexts, and a thought-process that continues intermittently long after a text has been completed.

*Note especially that information given in newspapers, in school, or by politicians tends to be too superficial or even outright faulty. Wikipedia was once a good source, but has deteriorated over the years (at least where many topics are concerned). The “talk” pages can often contain a sufficient multitude of view-points, however.

If a conflict has been detected, it should be investigated with a critical eye in order to find a resolution. Here there are at least* five somewhat overlapping alternatives to consider: (a) One or both beliefs are wrong and should be rejected or modified. (b) Both beliefs have at least some justification and they should be reconciled, possibly with modifications; e.g. because they cover different special cases. (c) The conflict is only apparent, e.g. through a failure to discriminate. (d) One or both beliefs are not truly held and the non-belief should be brought to consciousness; e.g. because profession is made more out of habit than conviction. (e) The support of both** beliefs is approximate or tentative (awaiting further evidence), and (at a minimum) this condition should be kept in mind, with revisions according to the preceding items often being necessary.*** Note that the above need not result in rejection of one belief—it can equally be a matter of modification or refinement (and it can also happen to both beliefs). This is one reason why investigation is so beneficial—it helps to improve one’s own mind, world-view, whatnot.

*A deeper effort might reveal quite a few more alternatives. I write mostly off the top of my head at the moment.

**Here it has to be both: If one belief is taken as true and only one as approximate, then it would follow that the approximate one is outright faulty (at least as far as the points of conflict are concerned), which moves us to the “One” case of (a).

***For instance, if two physical theories are not perfectly compatible, the realization that physical theories are only approximations-for-the-now (eventually to be replaced by something better) gives room for an “approximate belief” in either or both theories. As long as work proceeds with an eye at the used assumptions, with the knowledge that the results might not be definite, and while being very careful in areas of known conflict or with poor experimental verification, this is not a major issue. Indeed, such “approximate belief” is par for the course in the sciences. In contrast, if someone was convinced that both were indisputably true, this would be highly problematic.

Again, giving blanket rules is tricky, especially with the very wide variety of fields/beliefs potentially involved and with the variety of the above cures. However, actually thinking and, should it be needed, gathering more information can be very productive. Having a good ability to discriminate is helpful in general; and with (b) and (c) it can be particularly beneficial to look at differences, e.g. if there is some aspect of a case where one belief is assumed to apply that is not present in a case where the other belief is assumed to apply. With (d), it is usually mostly a matter of introspection. (In addition, the advice for detecting conflicts applies to some parts here and vice versa. Often, the two will even be implicit, hard-to-separate, parts of a single process.)

For a specific, somewhat complex example, consider questions around what makes a good or poor book, movie, whatnot—especially, the property of being hackneyed: On the one hand, my discussions of various works have often contained a complaint that this-or-that is hackneyed. On the other, it is quite common for works that I enjoy and think highly of (at least on the entertainment level*) to contain elements of the hackneyed—or even be formulaic. Moreover, I rarely have the feeling that this enjoyment is despite something being hackneyed—this weakness, in itself, does not appear to disturb me that strongly.

*Different works serve different purposes and should be measured with an eye on the purpose. When I watch a sit-com, depth of character is far less important than how often and how hard I laugh; the romance in an action movie is a mere bonus (or even a negative, if there is too much); vice versa, an action scene in a rom-com is mere bonus; plot rarely makes sense in non-fiction; etc. For more “serious” works, more serious criteria and higher literary standards apply.

Is my explicit complaint compatible with my implicit acceptance? To some degree, yes; to some degree, no.

On the “no” side: I suspect, after introspection, that I do or do not find a certain work enjoyable, thought-worthy, whatnot, based on criteria that are not explicitly known to me.* If I find enjoyment (etc.), I am less likely to look for faults; if I do not, I am more likely to look for faults—but there is no guarantee that my original impression was actually caused by the faults now found. Some will almost certainly have been involved; others need not have been; and there might have been other faults involved that I never grew explicitly aware of.

*There are many aspects of different works that can individually have a large impact, and the end-impression is some form of aggregation over these aspects. For instance, consider the impact of music on movies like “Star Wars” and “Vertigo” or on TV series like “Twin Peaks”—change the music, and the work is lessened. Notably, the viewer is rarely strongly aware of the impact of the music (even be it hard to miss in the aforementioned cases).

On the “yes” side there are at least three things to consider: Firstly, a work can be hackneyed and have sufficient other strengths to outweigh this. Poor works are rarely poor due to one failure—they are poor because they fail on numerous criteria, e.g. (for a movie) being hackneyed and having a poor cast, wooden dialogue, unimpressive music, … Being hackneyed is, alone, not a knock-out criterion—being original is an opportunity to gain points that a hackneyed work simply has not taken. Secondly, different criteria can apply to different works,* and being hackneyed is not necessarily an obstacle for the one work, even though it is for another. Thirdly, if something is known to work well, it can be worth using even if it is hackneyed—“boy meets girl” has been done over and over and over again, but it still works. (See also an excursion below.)

*Partly, as in a previous footnote; partly, with an eye on the expected level of accomplishment. For instance, my very positive discussion of Black Beauty must be seen as referring to a children’s book—had I found the exact same contents in a work with the reputation and target group of e.g. James Joyce’s “Ulysses” (which I have yet to read), I would have been less enthusiastic.

All in all, I do not see a problem with this conflict in principle; however, I do suspect that I would benefit from (and be fairer in detail* by) looking closer at what actually created my impression and less closely on criteria like “original vs. hackneyed”. The latter might well amount to fault finding or rationalization. To boot, I should pay more attention to whether specifically something being hackneyed has a negative effect on me (beyond the mere failure to have a positive effect through originality).

*I doubt that my overall assessment would change very much; however, my understanding and explanation of why I disliked something would be closer to the truth. Of course, it might turn out that being hackneyed was a part of the explanation in a given case; however, then I can give that criticism with a better conscience…

Excursion on expectations:
In a somewhat similar situation, I have sometimes complained about a work having set a certain expectation and then changed course. That is an example of another issue, namely the need to discriminate*. There are setups and course changes that are good, in that they reduce the predictability, increase the excitement, whatnot. This includes well-made “plot twists”. There are, however, other types of expectations and course changes that are highly unfortunate—including those that make the reader (viewer, whatnot) set his mind on a certain genre or a certain general development. A course change here is likely to detract from the experience, because different genres are enjoyed in different manners, and because there is often an element of disappointment** involved. Depending on the change, there can also be a delay and reorientation needed that lessens concentration and enjoyment further. Another negative type of change is (almost always) the kind that tries to rejuvenate a TV series or franchise by sacrificing what once made the series worth watching, by “jumping the shark”, and similar.

*Yes, discrimination is also a sub-topic above; however, here we have a too blatant case to be truly overlapping: There is no need for me to re-investigate my own beliefs—only to clarify them towards others. (Except in as far as I might have suffered from a similar fault-finding attitude as discussed above, but that attitude is just an independent-of-the-topic aspect of an example.)

**Note that this holds true, even when the expected and the delivered are more-or-less equivalent in net value. (However, when there is a significant improvement, the situation might be different: I recall watching “Grease” for the first time, with only a very vague idea of the contents; seeing the first scene; and fearing that I was caught in the most sugary, teenage-girls-only, over-the-top romance known to man—the rest was a relief.)

Excursion on “boy meets girl”:
An additional, but off-topic, complication when considering the hackneyed, is that there comes a point of repeated use when the hackneyed does not necessarily register as hackneyed and/or is so central to a genre that it is hard to avoid. Consider the typical “boy meets girl” theme. This, in itself, is so common and so basic to movie romance that it rarely registers as hackneyed. In contrast, the rarer “childhood friends fall in love” does*. With “boy meets girl”, the question is less whether the theme lacks originality and more whether the implementation is done with sufficient quality** and whether the details are also lacking in originality (is there, e.g., yet another desperate chase to and through an airport at the end?).

*At least to me, which also shows that there can be a large element of subjectiveness involved.

**Oscar Wilde defended against accusations of plagiarism by pointing to the difference between adding and removing a petal when growing tulips: To repeat in a better manner what someone else has already done, is not necessarily a fault.

Excursion on good fiction:
More generally, I am coming to the conclusion that fiction (and art, music, whatnot) either works or does not work—and if the end result works, an author (movie maker, whatnot) can get away with more-or-less anything along the road. This includes the hackneyed, poor prose, absurd scenes, artistic liberties with science, a disregard for convention and expectation, the tasteless, … (But the question of “because or despite?” can be valuable, especially with an eye on differing reactions among different readers.) The proof of the pudding is in the eating—not in the recipe.

Written by michaeleriksson

November 17, 2018 at 2:53 am

Adults say the darnedest things

with one comment

I just re-encountered the fiction (and real-life) cliche of the child–adult exchange “He started it!”–“That is no excuse!”. This is a good example of adults telling children things that simply do not make sense,* and that are likely to leave the children unconvinced: “He started it!” is not just an excuse—it is a perfectly legitimate reason. There might be situations where it can be pragmatically better to turn the other cheek, try to deescalate, find a more constructive solution than retaliation, whatnot; however, that has no impact on the ethics of the issue and expecting a child to understand such matters is highly optimistic.** Furthermore, there are many cases where retaliation in kind is the best solution, especially when boundary pushers and bullies are concerned (which will very often be the case with children): Both being exposed to consequences for inappropriate behavior and having to take a dose of one’s own medicine can have a great positive effect in limiting future inappropriate behavior.

*I suspect that this is partly due to the answer being dishonest, in that the adult is motivated by something unstated. (“What” will depend on context, but a fear of negative consequences from e.g. fights between children could be high on the list, as could a wish to just keep some degree of peace and quiet.)

**And arguments in that direction are usually absent to begin with.

Note how the “adult” reply makes no attempt at providing reasons or actually convincing, and how a discussion of pros and cons is entirely absent—it is just an (invalid) claim that the child is supposed to take at face value “because I said so”. No wonder that children are not more cooperative…

The “because I said so” is, of course, a good example in its own right—the effect of such argumentation is that the child’s rejection of a claim is complemented by a feeling that the adult is an unreasonable dictator. It might or might not create compliance in action, but compliance in thought is not to be expected. Worse, it could have a harmful long-term effect on the relationship. It is true that there might be a point where a child is too young or the situation too critical for a deeper discussion to be beneficial; however, the uses that I have seen (be it in fiction or in real life) would usually have benefited from a motivation.* Consider** e.g. a child’s refusal to do the dishes countered with “because I said so” vs. “we agreed that everyone should take a turn—and today is your day”; the adult’s refusal to play based on “because I said so” vs. “I am sorry, but I am dead tired and need to take a nap”; or even any discussion resulting in “because I said so” vs. “I pay the bills; I make the rules”. The last example might superficially seem to offer no real difference, but most children (above a certain age) will at least be able to see the adult perspective of the bill payer and the hypothetical alternative of buying greater freedom through going hungry and homeless—but not of the more power-based “because I said so”. (Also note that “I am the parent; I make the rules” is closer to the dictator than to the bill payer.) At the same time, I advise against reasonable-sounding arguments that do not make sense on closer inspection or that could backfire.***

*Generally, even among adults, I recommend that any rule and whatnot be given some form of motivation, so that those affected know why something should or should not be done. This to increase the chance of compliance, to make more informed choices possible (e.g. when dealing with interpretation and special cases), and to allow a critique of the rule with an eye on future improvement.

**I stress that I do not consider the alternative arguments to be silver-bullets—dealing with children is hard and often amounts to a “damned if you do; damned if you don’t” situation. They are, however, improvements.

***E.g. “That is no excuse!” above. A more interesting example stems from my own childhood (pre-VCR): My mother argued that she should watch the news on the bigger color-TV and I a simultaneously broadcast movie on the smaller black-and-white one, because she had not seen the news in a week (due to a study absence). From my perspective, the negative effects of the inferior device were larger for a movie than for the news, and it might have been years (not a week) before another opportunity to watch that movie arose. The result? I was left with not only an implicit “because I said so”—but also with the feeling that my mother was dishonest… (Adult me is open to the alternative that she simply had not thought the matter through.)

A sometimes reasonable, but more often misguided, argument is “And if your friends all jumped off a bridge, would you follow them?!?” (with many variations). The analogy involved is usually inappropriate (notably regarding dangers) and/or too subtle (the “lemming” aspect). Normally, the only justification is that it comes as a response to a weak argument from the (typically?) teenager, e.g. “but all my friends are going”. Here, however, such “smart ass” answers are not helpful. Better would be to evaluate the suggestion (e.g. going to a certain party) on its merits, factoring in both the fact that “all my friends” can seem like a strong argument to the teenager (even when it is not), and that there are at least some cases where the argument has merit through its impact on teenage life* or through giving a different perspective**.

*The degree to which adults should be concerned about this is limited, but it is not something to ignore entirely. There are aspects of popularity and networking that might be largely alien to an adult (and to some teens, including my younger self); however, they are there and showing them some consideration is not wrong.

**Notably, that something is widespread and tolerated by other parents could point to one’s own attitude being too restrictive.

Generally, I caution against giving “smart ass” answers to children, and recommend using only factual arguments. For instance, my school class would sometimes be asked to explain/solve/perform/… something that had simply never been taught (especially when teachers changed). Typically, someone would reply with the idiomatic “det har vi inte fått lära oss”, which carries the clear intent of “that has not been taught” (and an implicit “so you cannot fairly require us to know”). Unfortunately, this phrase is vulnerable to the deliberate misinterpretation of “we have not been allowed to learn this” and the answer was invariably along the lines of “Who has forbidden it?”. The results on the class were never positive… To boot, this answer is doubly unfair in that (a) the students cannot be expected to guess what the next teacher considers “must haves” when the previous teacher saw things differently, and (b) traditional schooling severely limits the time, energy, and (often) interest available for own learning in addition to the official curriculum. (Note that both, even taken singly, invalidate the potentially valid angle that this answer does have—that learning should not be limited to school and that teachers usually indicate the minimum to learn.)

In a bigger picture, adults often impose constraints or obligations on children that make little sense. For instance, what is the point of a child making his own bed, should he not see a benefit for himself in doing so? There is no automatic advantage in a made bed and if no-one else is hurt by it… Indeed, apart from when I receive visitors (actual reason) or change the sheets (trivial extra effort), it might be more than twenty years since I, as an adult, made my bed.

Excursion on women as perpetrators:
While errors like those above are by no means limited to women, they do appear to be considerably more likely to come from women. It is conceivable that at least some of the problem stems from an arbitrary imposition of some irrational values that often occur among women (e.g. that any and all violence, no matter the reason, is evil, or a wish for orderliness-for-the-sake-of-orderliness).

Excursion on fairness:
Much of the above is related to the feeling of being unfairly treated. A fair treatment is by no means a guarantee for a happy and well-behaved child; however, the opposite will make things worse. While fair treatment might be important to most adults (at least when on the receiving end…), it is paramount to most children.

Written by michaeleriksson

November 13, 2018 at 2:08 am

Thoughts around social class: Part II (prices etc.)

with 3 comments

As I have often remarked, the best way to create a society with a higher degree of wealth for those* with relatively little wealth and income is not to redistribute the existing wealth (often at the risk of reducing it)—but to increase the overall wealth (even should it result in larger differences in distribution).

*I am troubled to find a good phrasing, with “poor” often being highly misleading, “disadvantaged” simultaneously a euphemism and (potentially) interpretable as a statement about opportunity (where the intended meaning relates to outcome), “lower class” too fixed in perspective, “less well off” either a euphemism or covering too large a group (depending on interpretation), …

The most obvious sub-topic is economic growth (e.g. in the rough GNP sense); however, for my current purposes, the area of prices and purchasing power is more relevant.* Trivially: If earnings rise faster than inflation** then every major group will (in real terms) earn more.*** This is, in turn, closely connected to factors that, directly or indirectly, relate to economic growth, including government policies, introduction of new or improvements to old technologies, energy prices, wastefulness or efficiency of business planning, …

*However, I have a text planned on some other aspects relating to growth.

**But note that inflation also has an effect on e.g. bank balances, which implies that not everyone will automatically grow wealthier in a stricter sense. These effects, however, will naturally hit people harder the more money they have—and might even be beneficial to those in debt.

***With a number of caveats and reservations when we look at the gritty details, e.g. that the distribution of increases is sufficiently reasonable, that there are no upsetting changes in (un-)employment patterns, and similar. Discussing such complications would lead to a far longer text.

A few observations relating to this sub-topic:

  1. The current “economic power” of e.g. a well-to-do (but not outright rich) German is quite great in some areas, e.g. relating to food; however, it is quite poor in others, notably where the government or major businesses tend to be involved. For instance, laying a single meter of Autobahn costs roughly six thousand Euro—under ideal circumstances. In extreme cases, it can be more than twenty times as much. (Cf. [1], in German.) A clear majority of all Germans could not afford to build a single meter of Autobahn out of their monthly income—even taxes and living expenses aside… Looking at “discretionary income”, most would need to work for several months for this single meter—and real low-earners might need years.

    Through such examples, we can see a clear difference between living in a wealthy/welfare/whatnot state and actually being wealthy. Indeed, as will be argued in a later installment, the vast majority of people are still (and might permanently remain) second- or third-class citizens in a bigger picture. (While, I stress, having far less to complain about than their grandparents.) This includes very many who typically consider themselves successes in life, e.g. middle managers, most upper managers, professionals in good employment or running small businesses, …

    The Autobahn example also raises some questions on the effective use of taxpayers’ money: Chances are that these costs could be a lot lower with a greater efficiency—but when the politicians pay with someone else’s money, there is little need for efficiency.

  2. A particularly troublesome issue is rent, prices of apartments/houses/land, and building costs: Looking at the vast improvements in most other areas (in terms of better products and lower prices), we might expect even relatively poor earners to affordably live in their own houses or large apartments. The reality, excepting some unattractive areas, is very different. In booming areas, prices can even be prohibitive for many. Even in non-booming areas, the monthly rent or mortgage payment is often the single largest expense. To further increase the economic well-being of the people, reducing these prices should be a priority.

    In part, these prices are caused by high localized population growth that is hard to work around in a timely manner—and lack of land can remain a long-term issue. (Someone happy with an apartment can be accommodated e.g. by building higher; someone looking for a large garden either has to be loaded or live somewhere else.) However, there are other issues, including overly long delays in building new apartments, building* costs, taxes**, luxury renovations***, “unnecessary”**** and temporary***** rentals, and undue realtor fees (see also several older texts, e.g. [2]).

    *For one thing, these are generally quite high in Germany, for reasons that include great demand, personnel costs (taxes and the employment construct; cf. a later installment), VAT, and a mentality with a disconnect between the value delivered and the price. For another, building methods, materials, “pre-fabrication”, …, have not advanced at the rate that they should have—possibly, because the building industry has little incentive for progress.

    **If a landlord makes a profit, he must pay taxes. Even if he does not make a profit, VAT will often be an issue. (Generally, note that taxes do not just hit an employee when he earns his money—they also hit him when he spends it, although usually in less obvious manners than income tax etc.)

    ***German law allows landlords to make many renovations, with a corresponding rent increase, even against the will of the tenant and in alteration of the terms of the contract. This is often used to artificially increase the rents considerably, and often with the side-effect that old tenants are forced to move out to be replaced by better earners.

    ****A common investment strategy in Germany is to buy a single apartment for the purpose of renting it out. This does increase the number of available rentals, but it also decreases the number of apartments available to those purchasing for their own living, which (a) drives the prices up unnecessarily and (b) forces some people to rent who otherwise would buy.

    *****In times of project work, temporary assignments, and whatnot, increasing numbers work in cities for such short times that it does not pay to rent or buy a regular apartment, but still long enough that living in a hotel is unnecessarily expensive. This has led to a market of furnished apartments that are rented for weeks or months at a rent considerably higher than the ordinary—and each of these apartments is removed from the regular market, increasing the deficit.

  3. A drop in prices is increasingly countered by product alternatives, product improvements, and product “improvements”, which partially or wholly move inexpensive products off the market in favor of more expensive ones. Consider e.g. the boom around various coffee machines, like Nespresso, Dolce Gusto, Senseo, which allows the sale of ground coffee with an immense increase in markup.* Another good example is the continual replacement of computer models with more powerful and pricey versions. This is to some degree good; however, the simple truth is that, for most people, a modern computer already is more powerful than it needs to be, and that the average customer would be better off if technological advancements were directed at lowering costs. A particularly perfidious** example is toilet paper, which becomes more and more expensive the more plies it has, even at the same overall quantity***—and where even two-ply paper has been artificially removed from the B2C market.

    *This is an example where the customer still has the option to use the older and cheaper versions—and is often better off doing so. For instance, I have repeatedly had a Nespresso in temporary (furnished) apartments, but actually grew tired of the taste and tended to prefer drip brews. In my own apartment, I have a Dolce Gusto, which I used on a daily basis for a while, enjoying the greater variety, but I ultimately returned to drinking drip brews almost exclusively—I have not used the Dolce Gusto in months, despite having a dozen capsules still lying around. A Senseo that I owned some ten or fifteen years ago produced outright poor coffee, having a shorter preparation time as the sole benefit compared to a drip brew.

    **In the other discussed cases, I pass no moral judgment: That businesses try to steer customers towards more profitable products is only natural, while the customer does get something in return and often still has a choice. The result might or might not be unfortunate for the customer, but at least there is only rarely an ethical wrong-doing. With examples like toilet paper, the customer is left with no improvement and no choice—and is forced to pay the additional and unnecessary cost.

    ***One segment of four-ply is more expensive than two segments of two-ply, etc., even though the overall weight and volume are virtually the same, and even though the customer could just fold two two-ply segments over one another for what amounts to four-ply.

    Without such artificial market alterations, life could be a whole lot cheaper.

  4. A partially overlapping area is convenience products that reduce the work-load for the customer at an increase in monetary costs. This is most notable when it comes to food, where e.g. very few people bake their own breads and whatnots today, because the convenience of store-bought alternatives almost always outweighs the additional* costs—and despite own baking once being almost a given.** Indeed, most bread loaves appear to be sold even pre-sliced today—unlike just a few decades ago.*** Coffee was regularly ground by hand in earlier days; today, it is mostly**** bought pre-ground. “TV dinners” can reduce effort considerably, but are a lot more expensive than own cooking. Etc.

    *In this specific area, we might have reached a point where even the monetary cost of own baking exceeds the price of ready-made products; however, if so, this is not generally true and it was not originally true in this area either.

    **Indeed, further back, even more elementary steps (e.g. grinding flour) might have taken place at home; while subsistence farmers might even have provided most of the ingredients.

    ***Here the additional cost in the process is likely to be very small; however, the customers are potentially hit from another angle: Pre-slicing reduces the expected “best before” date.

    ****And the exceptions are likely almost exclusively for use in coffee machines that automatically grind beans.

    As an aside, these convenience products do not only bring a money–effort trade-off, but often result in less choice and/or suboptimal products. Consider e.g. the German pre-sliced cheese vs. the block cheese for manual slicing that is common in Sweden—to me, the former slices are too thick, simultaneously reducing how long a given quantity of cheese lasts and making sandwiches less healthy. Or consider the often quite poor nutritional profiles of TV dinners compared to own cooking.

  5. Luxury and brand products are an area bordering on the perfidious: Often these come with a value added; often they do not; and only very rarely is the value added comparable to the price hike. For the rich, this is not much of an issue; however, even the “middle class” is often well-advised to stay away from brand products without a plausible real* value added. Unfortunately, a liking for brand, or even luxury, products is quite common even among those who earn little—and here the effects can be outright dire, e.g. when a low-earner spends most of a small yearly surplus on shoes** instead of putting it in the bank for a rainy day.

    *As opposed to e.g. one that is explicitly or implicitly claimed in advertising, or one that only applies to other groups than the actual buyer: If, hypothetically, Nike brings a value-added to an Olympic runner, it is not a given that a junior-high student taking physical education also benefits.

    **To take an extreme fictional example, the infamous Carrie Bradshaw once discovered that she (a) could not afford her apartment, (b) had spent forty-or-so thousand USD on shoes over the years. Generally, she might be a good example in that she likely was not that low-earning, instead creating her recurring economic problems through wasteful living.

    In particular, it is a very great fallacy to assume that “more expensive” also implies “better”.

  6. Attempts to gain through large-scale salary/wage increases, as attempted by unions, will not be overly successful without a simultaneous and independent trend towards lower prices (relative to earnings). Not only will people with more money have a tendency to spend more,* which drives prices upwards, but the additional cost of work will also have an effect on product prices. Notably, there are often chain effects, e.g. that a wage hike in the mining industry increases metal prices, which increases costs in e.g. the machine industry, both metal prices and machine prices affect the tin-can industry, etc. If we, hypothetically, were to increase wages and salaries by a blanket ten percent, the individual businesses would not just see a ten-percent increase in the cost of work—they would also see an increase in almost all other costs. While these other increases might fall well short of the full ten percent, they can still be sizable—and they will lead to a greater price increase on a business’ products than would a similar cost-of-work increase limited to only that business. (Also note e.g. that a three percent wage increase at two percent inflation is slightly better than a ten percent increase at nine percent inflation; a small worked example follows after this list.)

    *Or e.g. work less to keep income roughly constant with an increase in spare time. Similarly, an employer who must pay his workers more might opt to employ fewer of them, e.g. through use of more automation. Such aspects will be largely left out, for the sake of simplicity.

    To some part, such increases can even amount to a competition between different unions and their members, in that any increase drives prices upwards, and that those with smaller increases will see a larger part killed by the resulting increase in prices. At least in theory, there could even be a net decrease in purchasing power for one union/member connected to the net increase seen by another.

  7. For similar reasons, naive suggestions sometimes made by the radical Left that everyone should earn the same, that the fortunes of Billy Gates et al. should be confiscated and divided among the people, and similar, will work poorly (even questions like ethics aside): Give people more money and they will (a) buy more, which drives prices up, and/or (b) work less, which forces businesses to pay higher wages/salaries, which drives prices up. After a period of fluctuation, the lower-earning/less wealthy would be back at roughly* the same purchasing power as before, and little would be gained. At the same time, the incentives to start businesses, come up with inventions, earn money, whatnot would be reduced, which would harm economic growth…

    *It would probably be a bit higher, but by nowhere near as much as expected in a naive calculation. Indeed, in some scenarios, the prices of lower-priced goods are likely to see unusually large increases, which would be particularly harmful. Consider e.g. a simplistic world of poor peasants and rich noblemen, of which the former live on bread and the latter on cake. Turn the noblemen into peasants and divide their money in equal shares among the population—and watch cake prices drop while bread prices increase. Either cake has to grow cheaper, or no-one will now be able to afford it. Bread, meanwhile, will be eaten by more people than before (unless the price decline for cake is very sharp) and the increased competition for this traditionally scarce resource will drive prices up.

As an aside, some of these items allow the customers a degree of own choice and prioritization, and quite a lot of money can be saved by making the more frugal choice.
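To make the arithmetic behind the inflation comparison in item 6 concrete, here is a minimal sketch in Python. The percentages are the ones from the text; the formula is simply the standard adjustment of a nominal raise for inflation, nothing specific to any particular economy:

def real_change(nominal_raise, inflation):
    # Real change in purchasing power, as a fraction: the raise is
    # first granted, then eroded by the rise in prices.
    return (1 + nominal_raise) / (1 + inflation) - 1

print(f"3% raise at 2% inflation: {real_change(0.03, 0.02):+.2%}")    # roughly +0.98%
print(f"10% raise at 9% inflation: {real_change(0.10, 0.09):+.2%}")   # roughly +0.92%

The first combination leaves slightly more purchasing power, despite the nominally much smaller raise.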

Excursion on myself and brand products, etc.:
While I do not take frugality to an extreme, I have almost always tried to avoid expenses without a corresponding practical value to me. This includes avoiding brands that are “famous for being famous”, buying lamps* at hardware stores instead of at department stores and dedicated lamp stores, having no qualms** about going into a “one euro” store, usually preferring the cheaper hotel to the hotel with more stars, and never having owned a car***. Outright luxury items have been quite rare and restricted to times of high income.

*For instance, when new in Wuppertal, I wanted to buy an uplight (?). Asking around, I was directed to a lamp store where prices started around three hundred Euro. I spent the extra time to find a hardware store and bought a perfectly satisfactory specimen at (possibly) sixty Euro. To boot, I found the visual design of the latter to be superior…

**These days, I suspect, few people are hesitant, but in earlier days I have heard strong negative opinions expressed towards these and similar stores, both in terms of perceived product quality and the risk of being seen as a pauper for visiting them. (Quality can be a legitimate concern for some products, but mostly the products are fine enough.)

***I have mostly lived in major cities with decent public transportation, and I prefer to walk when it is reasonably possible. Having a car would rarely have been worth the cost.

This, however, does not mean that I am stingy when I see a benefit. Most notably, I have repeatedly taken sabbaticals to spend time on studies/writing and to enjoy life—while a year-or-so off work is very expensive, it really brings me something. (I strongly recommend it to those fortunate enough to have the opportunity.) Similarly, I have had no qualms about living in hotels or temporary apartments when working in other cities, even at distances where most others commute. If I can afford to cut out that extra one-to-two hours a day, with all that extra stress, having to get up earlier in the morning, having to wait longer before I can relax in the evening, etc., then I have a very real benefit. At the same time, I have always adapted to my income, e.g. in that I spend considerably less money on food, eating out, clothes, whatnot today (on a sabbatical) than I did a year ago (working full-time).

Written by michaeleriksson

November 12, 2018 at 1:15 am

Poor user interfaces / gkrellm and battery notifications

leave a comment »

Even the world of software development, even in the Linux and “open source” areas, often shows signs of incompetence so blatant that it boggles the mind. This in particular where usability and workflows are concerned.

Take what I just observed: I tried to activate the low-battery notifications in the monitoring tool gkrellm.* These are activated through a checkbox on one config page, with the actual configuration (what to do when) on another, in a pop-up—itself disputable, considering that there was ample space to put both parts on the same page. I opened the other page, filled in appropriate values, closed the pop-up, and checked the checkbox to activate the settings. What happens? I am alerted that the settings have changed and that I must now open the pop-up to enter information (I do not recall the exact formulation). As I re-open the pop-up, I see that the settings just manually entered have been arbitrarily reset to empty default values!!!

*I run my notebook attached to an electric socket almost without exception and normally have no need for notifications. However, recently, some type of glitch in the connection at the notebook side has led to a temporary interruption on several occasions—and today I awoke to find that my notebook had run its battery down and shut off during the night.

Not only is this an extremely user-hostile restriction on the workflow, it is also not communicated in advance, the reset of the values borders on the inexcusable per se,* and (with some reservations for the detail implementation) this could prevent one of the most obvious uses of the settings—to keep a constant set of detail settings that are activated or deactivated as the situation fits.** (A sketch of the more sensible behavior follows after the footnotes.)

*With few exceptions, values explicitly set by a user should never be changed automatically or as an unexpected side-effect of a manual action. A similar example is the cookie settings in Firefox: If someone allows cookies per the main option, chooses the sub-option to disallow third-party cookies (as he should!), and later deactivates cookies per the main option, then a later reactivation of cookies will also change the sub-option to allow third-party cookies, which is counter-intuitive, against the stated will of the user, and to boot a poor default setting. (Disclaimer: I have not verified that this mis-behavior is still present in the recently re-vamped Firefox.)

**E.g. in that someone using a notebook on a daily commute might have them off (as an unnecessary disturbance, because he knows that he will arrive before the battery runs down), but have them on when otherwise working without external power.
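To illustrate the behavior that I would have expected, here is a minimal sketch in Python of a hypothetical settings model (my own invention for illustration, not gkrellm’s actual code): the detail values are stored independently of the on/off flag, so toggling the flag never touches them.

class AlarmSettings:
    def __init__(self):
        self.enabled = False
        self.details = {}   # e.g. {"warn_at_percent": 10, "command": "beep"}

    def set_details(self, **values):
        # Explicit user input is stored and never silently discarded.
        self.details.update(values)

    def set_enabled(self, flag):
        # Only the flag changes; the detail values stay untouched.
        self.enabled = flag

settings = AlarmSettings()
settings.set_details(warn_at_percent=10, command="beep")
settings.set_enabled(True)
settings.set_enabled(False)   # e.g. switched off for a daily commute
settings.set_enabled(True)    # the details are still there when re-enabled
assert settings.details["warn_at_percent"] == 10

With such a separation, enabling and disabling the notifications costs a single click and never destroys previously entered values.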

Generally, gkrellm has a very odd and unexpected user interface, be it in the main product, the various plug-ins individually, or the various plug-ins compared to one another. (The F1-key opens the configuration, not the help. Sometimes a mere click on a plug-in activates or deactivates a feature; sometimes it does not. Extra information is sometimes displayed when clicking minuscule squares with no functional identification. The configuration is usually poorly thought-through. Etc.)

Written by michaeleriksson

November 10, 2018 at 4:08 pm

Prose and “Der Untergang des Abendlandes”

with 4 comments

I find myself unexpectedly returning to prose and style of writing—after having intended to deepen my understanding of history and societal development: Yesterday, I started to read Oswald Spengler’s “Der Untergang des Abendlandes” (“The Decline of the West”), which, going by reputation, should have contained a fair amount of material of interest to me. After about a hundred pages, most of which consisted of a foreword, I gave up in frustration—the man simply could not write. (And the overall work is two volumes of more than six hundred pages each.)

Ideas, definitions, and arguments are drawn out ad infinitum. What could have been stated in a single ordinary* sentence covers an entire paragraph—or more. The total contents of these hundred-or-so pages could be compressed to ten. (If the rest of the work is of a similar character, I could have been more than three-quarters through a compressed version with the same effort.)

*As opposed to the often very long sentences used by Spengler, which can be paragraph-sized in their own right. (Also see below.)

The flow of the individual sentence is often highly confused, reminding me of a compass needle in the presence of magnetic disturbances. Hypothetical* example: A horse is a four-footed, in other words quadruped, animal, excelling in speed, contrary to the cow, whose digestive system is of the utmost complexity, and ridden, i.e. used as a means of transport, by humans, or dogs in a circus, the cow hardly ever being ridden, …

*Considering his complicated style and issues of idiom, I am loath to actually attempt a translation of a real example. Besides, I would need to make a re-download to find such an example. Note that I have not attempted to duplicate his style in any detail—I just try to bring the general impression of the compass needle across.

As for sentence structure, sentence length, and choice of words, he makes me look like Hemingway. I do not like to throw the first stone here, both because of the hypocrisy involved and because many failures to understand a sentence can be put more on the reader than on the writer. However, I readily admit that there were sentences that I had to re-read even to just understand them as sentences (as opposed to understand the idea or arguments presented by them—and the ideas and arguments were usually not hard to understand once the sentence had been deciphered). In a few cases, a sentence was also so long that I had to go back to the beginning in order to replenish my memory and to be able to put the end of the sentence in context…

The “reasoning” often consists of nothing more than claiming that something would be obvious, often drowned in a barrage of words. Spengler appears to continually confuse “personal belief” with “logical conclusion” and/or attempt to hide a lack of actual arguments through a flow of words.

Excursion on the actual contents:
Because I covered so small a portion of the overall work, I cannot make that many statements about the actual contents (as opposed to how the contents were written). However: On the one hand, Spengler and I seem to share a conviction that there are many lessons and, possibly, predictions to be drawn from past civilizations and phases of individual periods and fields*. (Also note sayings like “history repeats itself” and “those who do not know their history are doomed to repeat it”.) I also share the general fear that the “Abendland” could, conceivably, be approaching its “Untergang”; and the general idea that progress might be replaced by stagnation as a civilization develops.** On the other, his “Morphologie”*** takes this to such an extreme that it lacks plausibility and would likely be considered pseudo-science today. Going by a few tables with comparisons between civilizations, I also suspect that he has bent the data to fit his theory on more than one occasion. (Something almost impossible to avoid, given the great difference in the developments that are considered morphologically equivalent…)

*For instance, I suspect that there are great similarities in the rise, flowering, and fall of this-or-that style of painting or music—not just empires.

**I note factors like that a lesser need to work hard in order to survive could lead to a “softer” and less industrious population, that entertainment could grow more important than accomplishment, the risks of dysgenic pressure, and similar.

***Roughly speaking, that the development of a civilization follows a certain fixed pattern with (on a historical scale) synchronously repeating stages of even areas like math and art. Unless the unread parts of the work contain strong arguments and examples, I see this as going much too far.

Excursion on predictions:
Future prediction based on history should always be taken with a grain of salt—Asimov’s psychohistory will likely remain more fiction than science. A good example is H. G. Wells’ “The Shape of Things to Come”, which gets almost everything wrong—and when it gets something partially right, the flaws render the prediction almost comical. For instance, he does manage to predict a German–Polish war with far-reaching consequences around 1940, half-a-dozen years past the time of writing, but has the Germans barely able to keep up with the Poles and, in my recollection, the Poles as the original aggressors.

Written by michaeleriksson

November 8, 2018 at 12:10 pm

Posted in Uncategorized
