Michael Eriksson's Blog

A Swede in Germany

German taxes and Elster III


In a telling development of what prompted my original post (that I had just wasted several hours trying to use the third-rate Elster products to file my VAT):

I recently received a notification from the “IRS” that, because I had exceeded the normal deadline (a delay caused exclusively by their incompetence), they would charge me a 35-Euro late fee.

They prevent me from fulfilling the rules they impose to their one-sided advantage, they waste hours of my time, they bring me to the point that I want to throw my notebook at the wall—and I have to pay them…

Of course, the complaint I just filed took another fair bit of time—and forced me to use Elster again…

I can only reiterate that the situation is utterly inexcusable. Elster, the overall tax system, and the German IRS all need to be completely overhauled or replaced by something better.


Written by michaeleriksson

January 7, 2018 at 4:40 pm

Posted in Uncategorized


Meltdown and Spectre are not the problem


Currently, the news reporting in the IT area is dominated by Meltdown and Spectre—two security vulnerabilities that afflict many modern CPUs and pose a very severe threat to at least* data secrecy. The size of the potential impact is demonstrated by the fact that even regular news services are paying close attention.

*From what I have read so far, the direct danger in other regards seems to be small; however, there are indirect dangers, e.g. that the read data includes a clear-text password, which in turn could allow full access to some account or service. Further, my readings on the technical details have not been in-depth and there could be direct dangers that I am still unaware of.

However, they are not themselves the largest problem, being symptoms of the disease(s) rather than the disease itself. That something like this eventually happened with our CPUs is actually not very surprising (although I would have suspected Intel’s “management engine”, or a similar technology, to be the culprit).

The real problems are the following:

  1. The ever growing complexity of both software and hardware systems: The more complex a system, the harder it is to understand, the more likely it is to contain errors (including security vulnerabilities), the more likely to display unexpected behaviors, … In addition, fixing problems, once found, becomes harder, more time-consuming, and likelier to introduce new errors. (As well as a number of problems not necessarily related to computer security, notably the greater effort needed to add new features and make general improvements.)

    In many ways, complexity is the bane of software development (my own field), and when it comes to complicated hardware products, notably CPUs, the situation might actually be worse.

    An old adage in software development is that “any non-trivial program contains at least one bug”. In the modern world, we have to add “any even semi-complex program contains at least one security vulnerability”—and modern programs (and pieces of hardware) are more likely to be hyper-complex than semi-complex…

  2. Security is something rarely prioritized to the degree that it should be, and often not even understood. When in doubt, “Our program is more secure!” is (still) a weaker sales argument than “Look how many features we have!”, giving software manufacturers strong incentives to throw on more features (and introduce new vulnerabilities) rather than to fix old vulnerabilities or to ensure that old bugs are removed.

    Of course, more features usually also lead to greater complexity…

  3. Generally, although not necessarily in this specific case: A virtual obsession with having everything interface with everything else, especially over the Internet (but also e.g. over mechanisms like the Linux D-Bus). Such generic and wide-spread interfacing brings more security problems than benefits, for reasons that include a larger interface (implying more possible points of vulnerability), a greater risk of accidentally sharing private information*, and the opening of doors for external enemies to interact with the software and to deliberately** send data after a successful attack.

    *Be it through technical errors or through the users and software makers having different preferences. For an example of the latter, consider someone trying to document human-rights violations by a dictatorship, who goes to great lengths to keep the existence of a particular file secret, including keeping the file on an encrypted USB drive and cleaning up any additional files (e.g. an automatic backup) created during editing. Now say that he opens the file on his computer—and that the corresponding program immediately adds the name and path of the document to an account-wide list of “recently used documents”… (Linux users, even those not using an idiocy like Gnome or KDE, might want to check the file ~/.local/share/recently-used.xbel, should they think that they are immune—and other files of a similar nature are likely present for more polluted systems.)

    **With the particularly perfidious variation of a hostile maker of the original software, who abuses an Internet connection to “phone home” with the user’s private information (cf. Windows 10), or a smart-phone interface to send spam messages to all addresses in the user’s address book, or similar.
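    As a quick way of verifying the claim in the first footnote, the list of “recently used documents” can be inspected programmatically. A minimal sketch in Python, assuming the file exists and follows the XBEL (XML) format used by GTK’s recent-files mechanism; the exact path can differ between systems:

```python
# Sketch: list the documents recorded in GTK's recently-used list
# (~/.local/share/recently-used.xbel, an XBEL/XML file).
# Assumption: the file follows the XBEL format; paths may differ.
import os
import xml.etree.ElementTree as ET

def recently_used(path="~/.local/share/recently-used.xbel"):
    path = os.path.expanduser(path)
    if not os.path.exists(path):
        return []
    tree = ET.parse(path)
    # Each <bookmark> element carries the URI of an opened document.
    return [b.get("href") for b in tree.getroot().iter("bookmark")]
```

    Anyone surprised by the output might want to reconsider how much an individual program should be allowed to record account-wide.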

To this, government intervention, restrictions, espionage, and whatnot might be added, already or in the future.

The implications are chilling. Consider e.g. the “Internet of things”, “smart homes”, and similar low-benefit* and high-risk ideas: Make your light-bulbs, refrigerators, toasters, whatnot, AIs and connect them to the Internet and what will happen? Well, sooner or later one or more of them will be taken over by a hostile entity, be it a hacker or the government, and good-bye privacy (and possibly e.g. money). Or consider trusting a business with a great reputation with your personal data, under the solemn promise that they will never be abused: Well, the business might be truthful, but will it be sufficiently secure for sufficiently long? Will third-parties that legitimately** share the data also be sufficiently secure? Do not bet your life on it—and if you “trust” a dozen different businesses, it is just a matter of time before at least some of the data is leaked. Those of you who follow security-related news will have noted a number of major revelations of stolen data being made public on the Internet during the last few years, including several incidents involving Yahoo and millions of Yahoo users.

*While there are specific cases where non-trivial benefits are available, they are in the minority—and even they often come with a disproportional threat to security or privacy. For instance, to look at two commonly cited benefits from this area: Being able to turn the heating in one’s apartment up from the office shortly before leaving work, or down from a vacation resort, is a nice feature. Is it more than a nice-to-have, however? For most people, the answer is “no”. Do I actually want my refrigerator to place an order with the store for more milk when it believes that I am running out? Hell no! For one thing, I might not want more milk, e.g. being about to leave for a vacation; for another, I would like to control the circumstances sufficiently well myself, e.g. to avoid receiving one delivery for (just) milk today, another for (just) bread tomorrow, etc. For that matter, I am far from certain that I would like to have food deliveries be a common occurrence in the first place (for reasons like avoiding profile building and potential additional costs).

**From an ethical point of view, it can be disputed whether this is ever the case; however, it will almost certainly happen anyway, in a manner that the business considers legitimate, the simple truth being that it is very common for large parts of operations to be handled by third-parties. For example, at least in Germany, a private-practice physician will almost certainly have lab work done by an external contractor (who will end up with name, address, and lab results of the patient) and have bills handled by a factoring company (who will end up with name, address, and a fair bit of detail about what took place between patient and physician)—this despite such data being highly confidential. Yes, the patient can refuse the sharing of his data—but then the physician will just refuse to take him on as a patient… To boot, similar information will typically end up with the patient’s insurance company too—or it will refuse to reimburse his costs…

On paper, I might look like a hardware maker’s dream customer: In the IT business, a major nerd, living behind the keyboard, and earning well. In reality, I am more likely to be a non-customer, in large part* due to my awareness of the many security issues. For instance, my main use of my smart-phone is as an alarm clock—and I would not dream of installing the one-thousand-and-one apps that various businesses, including banks and public-transport companies, try to shove down the throats of their customers in lieu of a good web-site or reasonable customer support. Indeed, when we compare what can be done with a web-site and with a smart-phone app (in the area of customer service), the app brings precious little benefit, often even a net detriment, for the customer. The business of which he is a customer, on the other hand, has quite a lot to gain, including better possibilities to control the “user experience”, to track the user, to spy on other data present on the device, … (All to the disadvantage of the user.)

*Other parts include that much of the “innovation” put on the market is more-or-less pointless, and that what does bring value will be selling for a fraction of the current price to those with the patience to wait a few years.

Sadly, even with wake-up calls like Meltdown and Spectre, things are likely to grow worse and our opportunity to duck security risks to grow smaller. Twenty years from now, it might not even be possible to buy a refrigerator without an Internet connection…

In the meantime, however, I advise:

  1. My fellow consumers to beware of the dangers and to prefer more low-tech solutions and less data sharing whenever reasonably possible.
  2. My fellow developers to understand the dangers of complexity and try to avoid it and/or reduce its damaging effects, e.g. through preferring smaller pieces of software/interfaces/whatnot, using a higher degree of modularization, sharing less data between components, …
  3. Businesses to take security and privacy seriously and not to unnecessarily endanger the data or the systems of their customers.
  4. The governments around the world to consider regulations* and penalties to counter the current negative trends and to ensure that security breaches hurt the people who created the vulnerabilities as hard as they hurt their customers—and, above all, to lay off idiocies like the Bundestrojaner!

    *I am not a friend of regulation, seeing that it usually does more harm than good. When the stakes are this high, and the ability or willingness to produce secure products so low, then regulation is the smaller of the two evils. (With some reservations for how well or poorly thought-through the regulations are.)

Written by michaeleriksson

January 7, 2018 at 1:08 am

Eternal September? I wish! (And some thoughts on email)


One of the most unfortunate trends of the Internet is that erstwhile standard procedures, behaviors, whatnot are forced out by inferior alternatives, as an extension of the Eternal September. Indeed, the point where even the Eternal September can be viewed with some nostalgia has long been reached:

The name arose through a combination of how, every September, the Internet would see a sudden burst of college freshmen, who still needed to learn how to handle themselves and who were an annoyance to older users until they had done so; and how the popularization of the Internet outside of college caused this inflow of unversed users to take place over the entire year. Even so, in the early days, the new users tended to be of above-average intelligence, tech affinity, and/or willingness to adapt—and many could still, over time, be made to leave their newbie status behind. The problem with the Eternal September was its Hydra character: Cut off one head and it grew two new ones.

Today’s situation is far, far worse: There is no filtering* of who uses the Internet, be it with regard to intelligence, technical understanding, willingness to learn from more senior users, …; and, by now, the vast majority of all users are stuck in a constant newbie state. Indeed, we have long reached a point where those who have been on the Internet since before the problems became overwhelming** are viewed as weirdos for doing things the right way***. Worse: Websites are usually made for the “lowest common denominator”, with regard to content, language****, and interface, making them far less interesting than they could be to the old guard. This is paralleled in a number of negative phenomena on the Internet (and, unfortunately, IT in general): Consider e.g. how much less profitable it would be to spam a collective of STEM students than a random selection of the overall population, or how much less successful an Internet-based virus would be among the tech savvy.

*A formal filter, a legal restriction, an equivalent of a driver’s license, or similar, was not in place before the Eternal September either. However, Internet access outside of higher education was reasonably rare, and even within higher education it was far more common in STEM areas than in e.g. the social sciences. Correspondingly, there was an implicit filter that made it far more likely for e.g. a math major to have Internet access than for e.g. a high-school drop-out.

**The linked-to Wikipedia page puts 1993 as the start date in the U.S., but other countries trailed somewhat. I started college in 1994 and the situation was reasonable for a few years more, before the Internet boom really started—after which it has been downhill all the way.

***Note that while there is some arbitrariness to all rules and there is usually more than one legitimate way to handle things, there is at least one important difference between the “old ways” and the “new ways” (even aside from the benefit of continuity and consistency, which would have been present with the old rules): The old ways were thought-out by highly intelligent people and/or developed by trial-and-error to a point where they worked quite well—the new are a mixture of complete arbitrariness; ideas by less intelligent and less experienced users, or even managers of some software company; attempts to apply unsuitable “real-world” concepts to the online world; … To this must be added the technical side: Someone who understands it, all other factors equal, is objectively better off than someone who does not—and less of a burden to others.

****Even Wikipedia, once an exemplary source of good writing, has gone downhill considerably, with regard to both grammar and style. (Notably, the “encyclopedic writing” aspect is increasingly disappearing in favor of a more journalistic or magazine style. I have long had plans for a more detailed post on Wikipedia, including topics like an infestation with unencyclopedic propaganda, but have yet to get around to it.)

A particularly depressing aspect, but great illustration of the more general problems, is the (ab-)use of email by many businesses, government institutions, and similar, who simply do not understand the medium and how to use it properly. No, I am not talking about spam here (although spam is a gross violation)—I am talking about everyday use.

Consider e.g.:

  1. That many businesses and virtually all (German) government institutions fail to quote the original text when replying and replace the subject line with something to the effect of “Your message from [date]”.

    The former makes it harder to process the message, in particular when a re-reply is needed, often forcing the user to open several other messages to check for past contents; the latter makes it much harder to keep messages together that belong together*, to find the right reply to an email, identify why a reply was sent**, etc. To boot, these problems likely contribute to poor customer service through creating similar issues on the other end, e.g. through a member of customer support not having all the information present without looking through old emails or some ticket system: Even if the information is there, access will be slower, more resources will be wasted, and there is a major risk that important information will still be missed.

    *Not only manually, but also through giving automatic “threading” mechanisms in email clients an unexpected obstacle.

    **When the original text is not included, this becomes even harder. In a worst-case scenario, when several emails were sent to the same counter-part on the same day (rare but happens), it might not even be possible to make the correct match—or only through comparing various IDs in the message headers. The latter is not only much more effort than just looking at subject lines, it also requires that all involved software components have treated them correctly, that the counter-part has used them correctly, and that the user knows that they exist…

    The explanation for this absolutely amateurish and destructive behavior is almost certainly that they have never bothered to learn how to handle email, and just unreflectingly apply methods that they used with “snail mail”* in the past. This is all the more absurd since going in the other direction, and altering some snail-mail procedures in light of experiences with email, would be more beneficial.

    *This phrase gives another example of how the world can change: Twenty years ago, I could use the phrase and simply assume that the vast majority of all readers would either know that I meant “physical mail sent by the post”—or be willing both to find out the meaning on their own and to learn something new. Today, while typing the phrase, I am suddenly unsure whether it will be understood—and I know that very many modern Internet users will not be willing to learn. I might be willing to give the disappearance of the phrase a pass: We can neither expect every phrase ever popular to continue to be so in the long term, nor the phrases of any group to be known in all other groups. However, the attitude towards learning and own effort is a different matter entirely.
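    To illustrate the threading mechanisms mentioned in the first footnote above: email threading rests on the Message-ID and In-Reply-To (and References) headers of RFC 5322, which is exactly what a reply that discards the original’s identifiers undermines. A much simplified sketch of the principle (a real client would implement something closer to a full threading algorithm):

```python
# Sketch: thread messages by following In-Reply-To references back
# to the root of the conversation. (Simplified; real threading also
# walks the References header and handles missing messages.)
def build_threads(messages):
    """messages: dicts with 'id' and optional 'in_reply_to'.
    Returns a mapping from message id to its thread root's id."""
    parent = {m["id"]: m.get("in_reply_to") for m in messages}
    def root(mid):
        seen = set()
        while parent.get(mid) in parent and mid not in seen:
            seen.add(mid)
            mid = parent[mid]
        return mid
    return {m["id"]: root(m["id"]) for m in messages}

msgs = [
    {"id": "<a@x>"},
    {"id": "<b@y>", "in_reply_to": "<a@x>"},
    {"id": "<c@z>", "in_reply_to": "<b@y>"},
]
threads = build_threads(msgs)
# All three messages resolve to the same root, "<a@x>".
```

    Replace the subject line and drop the identifiers, and nothing remains for such a mechanism to work with.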

  2. When messages are quoted, established rules are usually ignored entirely, especially with regard to putting the quote ahead of the answer and to intermingling quote and reply, which makes an enormous difference in the ease of processing the reply. Some tools, notably MS Outlook, more-or-less force a rule violation on the users… When quote and reply are intermingled, it is usually not done in the established manner, with separating blank lines and use resp. non-use of a “> ” prefix; instead, the new text is simply written straight into the old and separated only by a highly problematic* use of a different color.

    *Among the problems: The colors are not standardized. The result becomes confusing as to who wrote what in what order after just a few back-and-forths, to the point of making a lengthier email discussion almost impossible (whereas it was one of the main uses of email in the days of yore). It forces the use of HTML emails (cf. below). There is no guarantee that the colors will be visible after printing or a copy-and-paste action to another tool (notably a stand-alone editor). Not all email clients will be able to render the colors correctly (and they are not at fault, seeing that HTML is not a part of the email specifications). Generally, color should not be used to make a semantic differentiation—only to highlight it. (For instance, in the example below, an email client could automatically detect the various “>” levels and apply colors appropriately; however, the “>” signs must remain as the actual carriers of meaning.)

    To give a (simplistic and fictional) example of correct quoting:

    >>> Have you seen the latest “Star Wars” movie?
    >> No.
    > Why not?
    The one before that was too disappointing.
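    As the footnote above argues, a client can derive the quote level of each line from the “>” markers themselves, e.g. for coloring, while the markers remain the actual carriers of meaning. A minimal sketch:

```python
# Sketch: compute the quote depth of a line from its leading ">"
# markers, tolerating both ">>>" and "> > >" styles.
def quote_depth(line):
    depth = 0
    while line.startswith(">"):
        depth += 1
        line = line[1:].lstrip(" ")
    return depth

example = [
    ">>> Have you seen the latest movie?",
    ">> No.",
    "> Why not?",
    "The one before that was too disappointing.",
]
depths = [quote_depth(l) for l in example]
# depths is [3, 2, 1, 0]: deepest quote first, the new reply last.
```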

  3. Ubiquitous use of “no-reply” addresses: Anyone who sends an email has a positive duty to ensure that the recipient can reply to this email. This includes event-generated automatic messages (e.g. to the effect of “we have received your email” or “your package has just been sent”) and newsletters. Either make sure that there is some human able to read the replies or do not send the email at all.* This is not only an ethical duty towards the recipient, it is also a near must for a responsible sender, who will be interested in e.g. tracking resulting failures.

    *The exact role of this human (or group of humans) will depend on the circumstances; often it will be someone in customer service or a technical administrator.

  4. Abuse of email as just a cost-saver relative to snail mail: There is nothing wrong with sending relevant attachments in addition to an email text, and in some cases, e.g. when sending an invoice*, even a more-or-less contentless email and an attachment can be OK (provided that the recipient has consented). However, some take this to an absurd extreme, notably the outrageously incompetent German insurance company HUK, and write a PDF letter containing nothing that could not be put in an email, attach the resulting file to the email, and use a boiler-plate email text amounting to “please open the attachment to read our message”. This, obviously, is extremely reader-hostile, seeing that the reader now has to go through several additional steps just to get at the main message**, and it ruins the normal reply and quote mechanisms of email entirely. To boot, it blows up the size of the message to many times what it should be*** and increases the risk of some type of malware infection.

    *This especially if the contents of the invoice are to some degree duplicated in the email proper, including total amount, time period, and due date (but more data is better). Writing an invoice entirely as a plain-text email is possible, and then the attachment would be unnecessary; however, there can be legitimate reasons to prefer e.g. PDF, including a reduced risk of manipulation, a more convincing and consistent visual appearance if a hard-copy has to be presented to the IRS, and an easier differentiation between the invoice proper and an accompanying message. (There might or might not be additional legal restrictions in some jurisdictions.)

    **Note that it is not just a matter of one extra click to open that one attachment that one time. Consider e.g. a scenario of skimming through a dozen emails from the same sender, from two years back, in order to find those dealing with a specific issue, and then extracting the relevant information to clarify some new development: If we compare a set of regular emails and a set of emails-used-to-carry-PDFs, the time and effort needed can be several orders of magnitude larger for the latter. Or consider how the ability to use a search mechanism in the email client is reduced through this abuse of email.

    ***This is, admittedly, less of an issue today than in the past (but HUK has been doing this for a very long time…). Still, there are situations where this could be a problem, e.g. when a mailbox has an outdated size limit. It is also a performance issue with regard to other email users: The slow-down and increase in resource use for any individual email will be relatively small; however, in sum, the difference could be massive. What if every message were blown up by a factor of 10, 100, 1000, …? What would the effects on the overall performance be, and what amount of band-width and processing power (especially if spam or virus filters are applied) would be needed? For instance, the two emails at the top of my current mailbox are, respectively, an outgoing message from me at 1522 bytes and the reply to said message at 190(!) kilobytes—roughly 125 times as much. The lion’s share of the difference? A two-page PDF file…
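    The blow-up is easy to demonstrate: MIME transports binary attachments in Base64, which turns every 3 bytes of payload into 4 bytes of text, on top of the extra headers and boundaries. A sketch using Python’s standard email package (the “PDF” is just a stand-in byte string, not a real document):

```python
# Sketch: compare a small plain-text email with a near-empty text
# plus a ~100 KB attachment. Base64 encoding alone adds roughly a
# third on top of the attachment's raw size.
from email.message import EmailMessage

plain = EmailMessage()
plain.set_content("Please find our reply below.\n" * 10)

with_pdf = EmailMessage()
with_pdf.set_content("Please open the attachment to read our message.")
fake_pdf = b"%PDF-1.4 " + b"\0" * 100_000  # stand-in for a two-page PDF
with_pdf.add_attachment(fake_pdf, maintype="application",
                        subtype="pdf", filename="letter.pdf")

ratio = len(with_pdf.as_bytes()) / len(plain.as_bytes())
# ratio is in the hundreds: the attachment dominates the message.
```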

  5. Use of HTML as an email format: Such use should, at most, be limited to recipients known both to handle the emails in a compatible manner and to be consenting: HTML is not supported by all email clients, and not in the same manner by all that do. It poses an additional security and privacy risk to the recipient. It bloats the message to several-to-many times the size it should be. It makes offline storage of the email more complicated. It makes it harder to use standard reply and quoting mechanisms. The risk of distortion on the way to the recipient is larger. … Notably, it also, very often, makes the email harder to read, through poor design.

    To boot, the reason for the use is usually very dubious to begin with, including the wish to include non-informative images (e.g. a company logo), to try to unethically track the recipient’s behavior (e.g. through including external images and seeing which image is retrieved when), or to make the message more aesthetic*. A particularly distasteful abuse is found in some newsletters that try to emulate the chaotic design of a commercial flyer or catalog, which often deliberately tries to confuse the readers—either the newsletter senders are incompetent or they try to achieve something incompatible with the purpose of a newsletter**.

    *This is simply not the job of the sender: The sender should send his contents in a neutral form and the rendering should be done according to the will of the recipient and/or his email client—not the sender. Efforts to change this usually do more harm than good.

    **Most likely, but not necessarily, to use it as advertising. I note that while newsletters are often unwelcome and while the usual automatic addition of any and all customers to the list of recipients is despicable, the abuse of a newsletter for advertising is inexcusable: Many will consent to being or deliberately register as recipients because they are interested in news about or from the sender; and it is a gross violation of the trust placed in the sender to instead send them advertising.

    There are legitimate cases where a plain-text email is not enough to fulfill a certain use-case; however, they are rare and usually restricted to company-internal emails. For instance, one of the rare cases when I use HTML emails is when I want to send the tabular result of a database query to a colleague without having to use e.g. an Excel attachment—and even this is a border-line spurious use: In the days of yore, with some reservations for the exact contents, this could have been done equally well in plain-text. Today it cannot, because almost all email readers use a proportional font and because some email clients take inexcusable liberties with the contents*.

    *For instance, Outlook per default removes “unnecessary” line-breaks—and does so making assumptions that severely restrict the ability to format a plain-text document so that it actually is readable for the recipient.

    Of course, even assuming a legitimate use-case, it can be disputed whether specifically HTML is a good idea: Most likely, the use arose out of convenience or the misguided belief that HTML was a generic Internet format (rather than originating as a special purpose language for the Web). It would have been better to extend email with greater formatting capabilities in an ordered, centralized, and special-purpose* manner, as has been done with so many other Internet related technologies (cf. a great number of RFCs).

    *Which is not automatically to say that something should be developed from scratch or without regard for other areas—merely that it should be made to suit the intended purpose well and only the intended purpose. Some member (or variation of a member) of the ROFF-family might have been suitable, seeing that they are much closer to plain-text to begin with.
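    For what it is worth, the tabular use-case mentioned above needs nothing more than fixed-width columns, provided that both sides use a monospace font and no client mangles the line-breaks. A sketch of such plain-text formatting (illustrative only; a real tool would also handle numeric alignment, wide characters, etc.):

```python
# Sketch: render a query result as a fixed-width plain-text table,
# the kind of formatting that any client used to display correctly
# when monospace fonts were the norm.
def format_table(headers, rows):
    cols = [list(headers)] + [[str(c) for c in r] for r in rows]
    widths = [max(len(row[i]) for row in cols) for i in range(len(headers))]
    def line(row):
        return "  ".join(c.ljust(w) for c, w in zip(row, widths)).rstrip()
    sep = "  ".join("-" * w for w in widths)
    return "\n".join([line(cols[0]), sep] + [line(r) for r in cols[1:]])

print(format_table(["id", "name", "amount"],
                   [(1, "Alice", 12.50), (2, "Bob", 7)]))
```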

  6. A particularly idiotic mistreatment of emails is exemplified by Daimler and recently under discussion at another large auto-maker (which one, I do not recall):

    If an email is sent to an employee at the wrong time, e.g. during vacation, the email is simply deleted…

    The motivation given is absurd and shows a complete lack of understanding of the medium: This way, the private time of the employees would be protected. To make matters worse, the “threat” comes not from the outside but from a (real or imagined) pressure from within the company to always be available. In effect, Daimler has created a problem, and now tries to solve this problem through pushing the responsibility and consequences onto innocent third parties.

    Email is by its nature an asynchronous means of communication; and one of its greatest advantages is that the sender knows that he can send a message now, even outside of office hours or during vacation periods, and have it handled on the other end later. He does not have to consider issues like whether the recipient (if a business) is open or (if a person) is at home with his computer on. Moreover, the “later” is, with some reservations for common courtesy and stated dead-lines, determined by the recipient: He can choose to handle the email in the middle of his vacation—or he can choose to wait until he is back in the office. Whichever choice he makes, it is his choice; and if he chooses the former against his own best interests, well, then he only has himself to blame.

    By this utterly ridiculous rule, one of the greatest advantages of email is destroyed. To boot, it does this by putting an unfair burden on the sender, who is now not only required to re-send at a later and less convenient time—but who can also face a number of additional disadvantages. Assume e.g. that the sender is about to head for his vacation, sends an important and urgent email, goes off the grid for two weeks, and comes back to see that his email has not even been read. Or take someone who writes a lengthy email and loses* his own copy after sending—should he now be required to re-type the entire thing, because of a grossly negligent policy of the recipient’s? Or what happens when employees in very different time zones or with very different vacation habits try to communicate with each other? Should the one work during his normal off-hours or vacation so that the other can receive the email during his time in the office? What happens if the notification** of “please send again” from company A is itself deleted by company B?

    *Disk crashes and accidental deletes happen; I have worked with email clients that do not automatically save sent emails; and, in the spirit of this post, not all users actually know how to retrieve sent emails that are saved…

    **Daimler apparently at least has the decency to send such notifications. I would not count on all copy-cats to follow suit.

    Want to keep your employees from reading company emails in their spare time? Do not give them email access from home, or cut it off during those times when no access is wanted! The way chosen by Daimler turns the reasonable way of handling things on its head—to the gross detriment of others. (This even assuming that the intended goal is legitimate: These are adults. We could let them choose for themselves…)

Written by michaeleriksson

January 5, 2018 at 12:54 am

Posted in Uncategorized


Iceland, irrational laws, and feminist nonsense


As I learned today, there has been a highly negative development and dangerous precedent in Iceland:

An extremely unwise new law requires “equal” pay between men and women*. This is a good example of the problems with a mixture of democracy and stupid/uninformed voters resp. stupid/uninformed/populist politicians; and equally why it is important to have “small government”, with governmental interference limited to what is necessary—not what buys more votes. Further, it is a good example of how a “noble” cause does more harm than good to society.

*The linked-to article uses the absurdly incorrect formulation “legalise”, which would imply that it would be legal to have equal pay. Presumably, the author intended some variation of “legislate”. (If not ideal, at least much better than “legalise”.)

There are at least the following problems involved:

  1. It falls into the trap of the obnoxious and extremely misleading “77 cents on the dollar” lie. Men and women already have equal pay for equal work in very large parts of the world, including Iceland (and Sweden, Germany, the U.S., …). In fact, in as far as there are differences, they actually tend to favour women… Only by making unequal comparisons, failing to adjust for e.g. hours worked, qualifications, field of work, …, can nonsense like the “77 cents on the dollar” lie even gain a semblance of truth. Cf. below.
  2. It fails to consider aspects like skill at negotiation and willingness to take risks. Cf. an earlier post.
  3. It risks, as a consequence of the two previous items, giving women a major artificial advantage and men a corresponding disadvantage. Basically, if feminist accounting were eventually to find “100 cents on the dollar”, a true accounting would imply “130 cents on the dollar”, giving women a de facto 30 % advantage (instead of the current alleged male 30 % advantage implied by “77 cents on the dollar”).
  4. Judging whether two people actually do sufficiently similar jobs that the same remuneration is warranted is extremely tricky, and the law risks introducing a great degree of arbitrariness or even, depending on details that I have not researched, shrinking the differences in remuneration between people on different performance levels even further*.

    *In most jobs, and the more so the more competence they require, there is a considerable difference between the best, the average, and the worst of those who carry the same title, have the same formal qualifications, whatnot. This is only very rarely reflected in payment to the degree that it should be (to achieve fairness towards the employees and rational decision making among employers). In software development, e.g., it is unusual for the difference in value added between the best and worst team member to be less than a factor of two; a factor of ten is not unheard of; and there are even people so poor that the team would be better off without their presence—they remove value. Do salaries vary similarly? No…

  5. For compliance, “companies and government agencies employing at least 25 people will have to obtain government certification of their equal-pay policies”. The implication is considerable additional bureaucracy and cost for these organizations and likely, again depending on details I have not researched, for the government itself.

    To boot, this is exactly the type of regulation that makes it hard for small companies to expand, and that gives the owners incentives to artificially limit themselves.

    From the reverse angle, for those who actually support this law, such vagueness could weaken* the law considerably—while keeping the extra cost and bureaucracy. Similarly, if the checks are actually fair and come to a conclusion that reflects reality, then changes in actual pay levels will be small and mostly indirect—with, again, the extra cost and bureaucracy added.

    *But I would not bet on it being enough to remove the inherent injustice and sexual discrimination it implies.

  6. It opens the doors to similarly misguided legislation, like e.g. a law requiring that certain quotas of women are met by all organisations—even when there are few women who are interested in their fields. (Implying that women would be given better conditions and greater incentives than men in those fields. Incidentally, something that can already be seen in some areas even with pressure stemming just from “public opinion” and PR considerations—not an actual law.)

As to the “77 cents on the dollar” and related misconceptions, lies, misinterpreted statistics, whatnot, I have already written several posts (e.g. [1], [2] ) and have since encountered a number of articles by others attacking this nonsense from various angles, for example: [3], [4], [5], [6], [7].
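To make the aggregation problem concrete, here is a toy calculation with entirely hypothetical numbers (my own illustration, not real statistics from any of the linked articles): every employee receives the same hourly rate for the same work, yet a naive comparison of average annual pay still produces a large apparent “gap”, simply because hours worked and fields of work differ between the groups.

```python
# Toy illustration of an unadjusted pay comparison. All names and
# numbers are hypothetical; the only point is the arithmetic.
employees = [
    # (sex, field, hours_per_week, hourly_rate) -- identical rate
    # for identical work within each field
    ("m", "engineering", 42, 50.0),
    ("m", "engineering", 45, 50.0),
    ("f", "engineering", 40, 50.0),
    ("f", "teaching",    32, 30.0),
    ("f", "teaching",    35, 30.0),
    ("m", "teaching",    35, 30.0),
]

def mean_annual_pay(sex: str) -> float:
    """Average yearly pay (52 weeks) over all employees of one sex."""
    pays = [hours * rate * 52
            for s, _field, hours, rate in employees if s == sex]
    return sum(pays) / len(pays)

# Unadjusted "cents on the dollar": roughly 0.74 here, although every
# employee gets exactly the same hourly rate for the same work.
ratio = mean_annual_pay("f") / mean_annual_pay("m")
```

The “gap” disappears the moment one compares hourly rates within the same field instead of averaging annual pay across everything at once.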

Simply put: Anyone who still believes in this nonsense is either extremely poorly informed or unable to understand basic reasoning—and any politician who uses this rhetoric is either the same or extremely unethical. I try to remain reasonably diplomatic in my writings, but enough is enough! The degree of ignorance and/or stupidity displayed by these people is such that they truly deserve to be called “idiots”. They are not one iota better than believers in astrology or a flat earth.

Written by michaeleriksson

January 2, 2018 at 9:35 pm

German taxes and Elster II


Yesterday, I was forced* to spend several hours in one of my least favorite ways: doing my taxes and using Elster, one of the most horrible web interfaces I have ever encountered. It is quite clear that the makers know nothing of good usability and standard UI paradigms, that they are not well versed in writing web applications, and that they have very little common sense. The amateurishness is absurdly, ridiculously large.

*Due to deadlines at year’s end. Of course, I could have done this earlier and kept New Year’s Eve free for more pleasant things, but my self-discipline during vacations is lousy—and I would still have had to do the same amount of work.

To look at some specific examples of problems (in addition to an overall extremely poorly thought-through and unintuitive interface and problems already discussed in the linked-to post):

  1. There is no good way to add a free-text explanation to a form*—despite this very often being needed. The main way** is to use a separate message form, which then is tautologically not connected to the original form. This message form can contain a text of some 14, 15 thousand (!) characters—more than enough for any reasonable purpose, one would think. Unfortunately, this text has to be entered in a window of a mere three (!) lines, making the use of an external editor a virtual necessity for any non-trivial text.*** Worse: The text must not contain any line-breaks—an entirely arbitrary and indefensible restriction. Consider the absurdity: I can enter a message that is longer than most of my blog posts, but I am not allowed to enter a line-break anywhere in that message… In effect, this amounts to the German IRS**** shooting itself in the foot: Good luck with the reading… Almost needless to say, there was no mention of this restriction in advance; it only became apparent when I pasted the completed text—which I then had to modify accordingly.

    *To be understood as the virtual equivalent of a paper form—not e.g. a form in the technical sense of HTML.

    **There is some way to add an additional message to at least some forms; however, this option is only displayed at the very end of the submit process and, to my recollection, requires an MS-Word and/or PDF document. It cannot be added during the actual input process, it requires considerably more effort than a normal text field, and there is no information given in advance that/whether it will be there.

    ***I very strongly encourage the use of external editors anyway, but the choice should be made by the individual—not the IRS.

    ****For the sake of brevity, I will use “IRS” throughout. This, however, is not an official translation, and the corresponding German entities are not perfect analogies of the U.S. IRS.

  2. The button, or more accurately looks-like-a-tab-but-leads-to-an-action element, “Prüfen” (“Check”) should reasonably check the form for inputs, consistency, whatnot, and then return the user to editing the form. It does not… Well, it does do the checks, but it then displays one single button, almost irresistible to click before reading it, which leads to a “send” action—something that would release the form for the enjoyment of the IRS and likely cause a number of problems for the user, if he was not actually finished*.

    *For obvious reasons, I have not tried this. It is possible that a renewed submit/send would be possible after amendments; it is possible that it would not be. However, even if the former, there will be more effort involved, and chances are that having multiple submits would over-tax the low-competence IRS.

    No, to resume editing, the user has to go into the line of looks-like-a-tab-but-leads-to-an-action elements and click on the element that amounts to “edit”.

  3. In stark contrast, the looks-like-a-tab-but-leads-to-an-action element “Speichern und Formular verlassen” (“Save and leave form”) does not actually do this, instead presenting the user with three different options—one of which leaves the form without saving… One of the other two allows the user to continue editing (the option that should have been, but was not, present for “Prüfen”!); while the third actually does what the original element purported to do: Saves the form and leaves it.

    Interestingly, there is no indication whether the element for continuing the edit saves the document or continues without saving. However, I do note that there is no separate looks-like-a-tab-but-leads-to-an-action element for the obviously needed action of just saving the form and continuing in one step—despite this being one of the most common actions that a user would reasonably take. (Yes, there are dim-wits who spend two hours editing an MS-Word document between each save; no, this is not how a wise computer user works. Saves should be frequent; ergo, they should be easy to do with a single action.)

  4. Yet another unexpectedly behaving looks-like-a-tab-but-leads-to-an-action element is “Versenden” (“Send”; however, I am not certain that I got the exact German name): This does not send the form; it leads to a check-your-data page with a real send button on it.

    By all means, the step of checking the data is quite sensible. But: Why is the element not called the equivalent of “Check your data and send”? (Contrast this with the previous item, which uses that type of longer name and then fails to perform that action… All in all, the approach to naming elements looks like a game of “pin the donkey” gone wrong.)

  5. The page is so misdesigned* that an important navigation bar on the left only becomes completely visible at 50 (!) % zoom, while being undetectable at 100 % zoom and workable to some approximation at 80 %.

    *Individual experiences could possibly vary based on the browser used. I used TorBrowser 7.0.10, which is an anonymity-hardened version of Mozilla Firefox 52.5.0—a recent long-term-release version of the second most popular browser on the planet: If a web page does not work with a browser like that, something is horribly wrong with the implementation and/or quality assurance of the page.

Believe it or not: This year’s version, following a re-vamp, was a major improvement over last year’s—despite still being an absolute horror.

To boot, there are a number of problems not (necessarily) related directly to Elster, but to the original conception of the old paper forms, the incompetence of the IRS, and/or the overly complicated German tax system.

For instance, the main form (“Mantelbogen”) for the tax declaration, needed by everyone, contains a number of pages that apply to only small minorities, e.g. those who have cared for an invalid in their respective homes. In contrast, the “N” form, which is used by all regular employees (i.e. likely an outright majority; definitely a majority among those pre-retirement), is a separate form. Now, I have no objections to the latter, seeing that not everyone* uses the “N” form; however, why not do the same with considerably rarer special cases? Note that while those who do not fall into these special cases can (and should!) simply forego filling these sections out, they still have to read through them in order to verify that nothing has been missed.

For instance, the forms require the entry of a number of data items that the IRS already knows (or should know, if it did its job properly), e.g. the total salary paid, the tax-on-salary paid, the amount of unemployment insurance paid, … Requesting this information again not only puts an unnecessary burden on the tax payer, it also introduces a considerable risk of even more unnecessary errors.

For instance, even among the forms themselves, there are redundancies (and additional risks of unnecessary errors). In my case, I have to enter information about various VAT amounts in both the VAT declaration and the EÜR (which calculates the taxable earnings); afterwards, I have to copy the taxable earnings from form EÜR into form S by hand. This is not only a potential source of errors, it also implies that I cannot complete the almost independent forms in any order I choose, possibly even get the comparatively short form S out of the way immediately after the year’s end and turn my attention to the more complex EÜR when I have a bit of vacation.

The whole system is a complete disaster, and I re-iterate what I wrote in the linked-to post: If the tax system and the available tools are so complex/unsuitable/whatnot as they are, then the government should be obligated to pay for “Steuerberater” for all tax payers.

*This includes me for the year 2016 (and 2017). Those self-employed need the “S” form (and the “EÜR” form; and the form for VAT, whatever it is called). Those who, like me in 2015, switch from regular to self-employment during the year need to fill out forms for both cases: N, S, EÜR, the form for VAT—and, naturally, all applicable common forms like the “Mantelbogen” and “Vorsorgeaufwand”.

As a funny/tragic aside:
There appears to have been a modification in how numbers are handled compared to my description in the linked-to post. Back then, I complained that an entry like “123” into a field requiring decimal places was not considered the same as “123,00”, instead resulting in an error message. This time I had the absurd problem that an input like “123.45” (copied from a calculator that, naturally, uses a decimal point; whereas German forms use a decimal comma) was automatically turned into “123.45,00”—and then followed by a new error message that no points were allowed. What the hell?!? Firstly, adding the “,00” outright is sub-optimal; it would be better to keep the original value and note that “123,00” is mathematically equivalent to “123”. Secondly, checks for errors should be made before* doing any modifications; if not, there is no telling what the end result is. Thirdly, any modification should be done in a sound manner, and the values “123,45” and “123.45,00” simply are not sound—assuming the German system, it would have to be either “12.345” respectively “12.345,00” or a pre-modification rejection. To boot, although more of a nice-to-have, there should be some setting where the user can determine his preference for the semantics of “.” and “,”. This would certainly have saved me a number of edits (and another possible source of errors) of values I rightfully should have been able to copy-and-paste. However, I would not necessarily recommend that the software be changed to allow the use of “thousand separators”: counting them as an error is a potential annoyance, but it also allows an additional consistency check to prevent dangerous misinterpretations of “international” numbers.**

*Depending on the exact circumstance, it can be very wise to check afterwards too; however, the “before” check is more critical, because it corresponds to what the user has actually entered. He needs to be given feedback to his own errors and any error remaining to be caught during the “after” check would be the result of errors made by the program.

**Many years ago, I entered something like “16.45” in my online banking, intending to transfer a small amount to pay a bill—and this was automatically turned into either “16.450,00” or “1.645,00”… Fortunately, I caught this change before the final submit. A no-periods-allowed check would have been quite welcome here (as would a does-the-value-make-sense check: “16.45” does not make sense as an input in the German system, just like “16,45” does not make sense in the U.S. system).
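For illustration, the validate-before-modify principle argued for above can be sketched in a few lines of Python (my own sketch, with nothing to do with Elster’s actual implementation): the input is checked against the German number format first, and anything malformed, like “123.45”, is rejected outright instead of being silently rewritten.

```python
import re
from decimal import Decimal

# Accepts well-formed German numbers: "," as decimal separator, "."
# only as a thousands separator in groups of three.
# e.g. "123", "12.345", "123,45", "12.345,00" -- but not "123.45".
_GERMAN_NUMBER = re.compile(r"^\d{1,3}(\.\d{3})*(,\d+)?$")

def parse_german_amount(text: str) -> Decimal:
    """Strictly parse a German-format amount.

    Raises ValueError for anything malformed, such as "123.45",
    instead of guessing what the user meant.
    """
    if not _GERMAN_NUMBER.match(text):
        raise ValueError(f"not a well-formed German number: {text!r}")
    # Only after validation: drop grouping points, turn the decimal
    # comma into a point for Decimal.
    return Decimal(text.replace(".", "").replace(",", "."))
```

As written, the pattern insists on proper thousands grouping; accepting ungrouped input like “1234,56” as well would need an additional alternative such as `^\d+(,\d+)?$`. The point is the ordering: rejection happens before any transformation, so the user gets feedback on what he actually typed.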

Written by michaeleriksson

January 1, 2018 at 10:58 pm

Posted in Uncategorized


A few thoughts on traditions and Christmas (and some personal memories)

with one comment

With Christmas upon us, I find myself thinking about traditions* again. This especially with regard to the Christmas traditions of my childhood, in light of this being the first Christmas after the death of my mother.

*Mostly in the limited sense of things that e.g. are done once a year on the same day or in a certain special manner, and as opposed to the other senses, say the ones in “literary tradition” and “traditional role”.

It has, admittedly, been quite a long while since I “came home for Christmas”, as she would have put it, and, frankly, the circumstances of my family had made Christmases at my mother’s hard for me to enjoy long before that. However, while the practical effect is not very large for me, there is still a psychological difference through the knowledge that some possibilities are permanently gone, that some aspects of those Christmases would be extremely hard to recreate—even aside from the obvious absence of my mother, herself. Take Christmas dinner: Even following the same recipes, different people can end up with different results, and chances are that even a deliberate attempt to recreate “her” version would be at best a poor* approximation—just like it was an approximation of what her mother used to make (and my father’s draws strongly on his childhood Christmas dinners). There is simply yet another connection with those Christmases of old that has been cut. In fact, when I think back on the most memorable, most magical, most wonderful Christmases, there are two versions that pop into my head:

*Note that a poor approximation does not automatically imply a poor effort. The point is rather that there are certain tastes and smells that can be important to us for reasons like familiarity and associations with certain memories, and that there can come a point when they are no longer available. I need look no further than my father to find a better cook than my mother, be it at Christmas or on a weekday; however, his cooking is different, just like his signature is—and even if he deliberately tried to copy her signature, the differences would merely grow smaller.

The first, predating my parents’ divorce, with loving and (tautologically) still married parents, a tree with a certain set of decorations, in the apartment we used to live in, and a sister too young to be a nuisance or even to properly figure in my recollections. I remember particularly how I, possibly around four or five years of age, used to spend hours* sitting next to the tree, staring at and playing with the decorations, and listening to a certain record with Christmas songs**. There was one or several foldable “balls” that I used to fold and unfold until the parents complained, and that fascinated me to no end. I have no idea whether the record and decorations exist anymore, we moved from the apartment almost forty years ago, the parents are long divorced—and I am, obviously, a very different person from what I was back then. With my mother dead, Father is the only remaining connection—and my associations with him and Christmas have grown dominated by those Christmases I spent with him as a teenager. (Which in many ways were great, but could not possibly reach the magic and wonder Christmas holds for a small child.)

*Well, it might have been considerably less—I really had no sense of time back then.

**In a twist, my favorite was a Swedish semi-translation of “White Christmas” by the title “Jag drömmer om en jul hemma”—“I’m dreaming of a Christmas back home”.

The second, likely* post-divorce and living in Kopparberg, where my maternal grand-parents resided, featured a setting in the grand-parents’ house and the addition of said grand-parents and my uncle and his family to the dramatis personae. Well, the house is torn down, most or all of the furniture and whatnots are gone, the grand-parents are both dead, and on the uncle’s side they started to celebrate separately relatively soon (and I was obviously never as close with them as with my parents or grand-parents). Again, I am a very different person, and with Mother dead, there is virtually no connection left.

*With the long time gone by and my young age, I cannot rule out that some pre-divorce Christmas also fell into this category.

However, memory lane is just the preparatory road, not the destination, today. The core of this post are two, somewhat overlapping, aspects of most traditions that I find interesting:

  1. What we consider traditional is to a very large part based on our own childhood experiences, both in terms of what is considered a tradition at all and what is considered the right tradition. Comparing e.g. my Christmases with my father and mother post-divorce, they had different preferences in both food and decorations* that often (cf. above) went back to their own childhoods. Similarly, U.S. fiction sometimes shows a heated argument over “star on top” vs. “angel on top” (and similar conflicts)—let us guess which of the parties were used to what as children…

    *Although some of the difference in decorations might be based less in preference and more in inheritance of specific objects.

    As for the very young me, I often latched on to something that happened just once or twice as a tradition, being disappointed when the “tradition” did not continue, say when the paternal grand-mother came visiting and did not bring the expected little marzipan piglet.

    Indeed, many traditions simply “run in the family”, and are not the universal and universally central part of, e.g., a Christmas celebration that a child might think. I recall visiting another family at a young age, thanking for dinner like my parents had taught me, and being highly confused when their daughter laughed at me. With hindsight, I cannot blame her: The phrase, “tack för maten och kamraten” (roughly “thanks for the food and the friend”), makes no sense, and is likely something my parents just found to be a funny rhyme—it is certainly not something I can recall having heard anywhere else.

    Even those traditions that go beyond the family can still be comparatively limited, e.g. to a geographical area. Christmas itself has no global standard (even apart from the differentiation into the “Christ is born” and “time for presents and Christmas trees/decorations/food” celebrations). There are, for instance, weird, barbaric countries where they celebrate on the 25th and eat Christmas turkey instead of doing the civilized thing and celebrating on the 24th with Christmas ham. The “Modern Family” episode dealing with the first joint U.S.–Colombian Christmas gives several interesting examples, and demonstrates well how one set of traditions can be weird-bordering-on-freakish to followers of another set of traditions.

  2. Traditions, even those that are nationwide, can be comparatively short-lived. Christmas, again, is a great source of examples, with even e.g. Christmas trees and Santa Claus being comparatively modern introductions, especially in countries that they have spread to secondarily. One of the most important Swedish traditions, for instance, is Disney’s From All of Us to All of You*—first airing in 1960 and becoming a virtually instant tradition, often topping the list of most watched programs of the year.

    *While this might seem extremely surprising, it can pay to bear in mind that Swedish children were starved for animation for most of the remaining year, making the yearly showing the more special. Also note the slow development of Swedish TV, with the original broadcast taking place in a one-channel system, and a two-channel system being in place until well into the 1980s—implying that the proportion of children (and adults) watching was inevitably large. That a TV broadcast of a movie or similar becomes a tradition is, obviously, not without precedent, even if rarely to that degree, with e.g. “It’s a Wonderful Life” and “Miracle on 34th Street” being prominent U.S. examples; and e.g. “Dinner for One” being a New Year’s example in several European countries.

    The entire concept of the U.S.-style Halloween is another interesting example, even when looking just at the U.S. and children (related historical traditions notwithstanding), but the more so when we look at adult dress-ups or the expansion to other countries, including going from zero to something semi-big in Germany within, possibly, the last ten to fifteen years. Fortunately, we are not yet at the point where we have to worry about children knocking on doors and demanding candy, but this might just be a question of time.

    Many traditions, in a somewhat wider sense, are even bound to the relatively short eras of e.g. a certain technology or other external circumstance. Consider, again, TV*: It only became a non-niche phenomenon in the 1950s (possibly even 1960s in Sweden); it was the world’s most dominant medium and one of the most important technologies by the 1980s, at the latest; and by 2017 its demise within possibly as little as a decade seems likely, with the Internet already having surpassed it for large parts of the population. By implication, most traditions that somehow involve a TV can safely be assumed to measure their lives in no more than decades. (Often far less, since many will fall into the “runs in the family” category.) If I ever have children and grand-children (living in Sweden), will they watch “From All of Us to All of You”, punctually at 3 P.M. on December 24th? The children might; but the grand-children almost certainly will not—there is unlikely to even be a broadcast in the current sense. (And even if one exists, the competition from other entertainment might be too large.) Looking in the other direction, my parents might have, but my grand-parents (as children) certainly did not—even TV itself was no more than a foreign experiment (and the program did not exist).

    *It is a little depressing, how many traditions in my family have revolved around food and TV—and I doubt that we were exceptional.

    Similarly, how is a traditional cup of coffee made? Well, for most of my life, in both Germany and Sweden, my answer would have been to put a filter in the machine, coffee in the filter, water in the tank, and then press the power button—for a drip brew. However, the pre-dominance of this mode of preparation (even in its areas of popularity) has been short, possibly starting in the 1970s and already being overtaken by various other (often proprietary) technologies like the Nespresso or the Dolce Gusto. Its dominance might have lasted less than 30, certainly less than 40, years. Before that, other technologies were more popular, and even outright boiling of coffee in a stove pot might have been the standard within living memory*. Possibly, the next generation will see “my” traditional cup of coffee as an exotic oddity; while the preceding generations might have seen it as a new-fangled is-convenient-but-not-REAL-coffee.

    *My maternal grand-mother (and several other family members) was heavily involved with the Salvation Army. For the larger quantities of coffee needed for their gatherings, she boiled coffee as late as, possibly, the 1990s. While I do not really remember the taste in detail, there was certainly nothing wrong with it—and it certainly beats the Senseo I experimented with some ten years ago.

All of this runs contrary to the normal connotations of a tradition—something very lengthy and, preferably, widely practiced. Such traditions certainly exist, going to church on Sunday being a prime example, stretching over hundreds of years and covering, until the last few decades, most of the population of dozens of countries. However, when we normally speak of traditions, it really does tend to be something more short-lived and more localized. I have e.g. heard adults speak of the “tradition” of dining at a certain restaurant when visiting a certain city—after just a handful of visits… (It could, obviously, be argued that this is just sloppy use of language; however, even if I agreed, it would not change the underlying points.)

Excursion on other areas and nationalism:
Of course, these phenomena are not limited to traditions, but can also include e.g. national or other group characteristics. A common fear among Swedish nationalists (with similarities in other countries) concerns the disappearance of the Swedish “identity” (or similar)—but what is this identity? More to the point, is the identity that I might perceive in 2017 the same that one of my parents or grand-parents might have perceived in 1967? Great-grand-parents in 1917? There have been a lot of changes since then, not just in traditions, but also in society, education, values, wealth, work environments, spare-time activities (not to mention the amount of spare time…), etc., and, to me, it borders on the inconceivable that the image of “identity” has remained the same when we jump 50 or 100 years*. Or look, by analogy, at the descriptions of the U.S. “generations”: While these are, obviously, generalizations and over-simplifications, it is clear that even the passing of a few decades can lead to at least a severely modified “identity”.

*Looking at reasonably modern times. In older times, with slower changes, this might have been different. (I use “might”, because a lot can happen in such a time frame; and, at least in historical times, there was always something going on over such time intervals, be it war, plague, religious schisms, …, that potentially could have led to similar variations.)

I strongly suspect that what some nationalists fear is actually losing the familiar and/or what matches a childhood impression: When I think back on Sweden, I often have an idealized image dominated by youthful memories, and this is usually followed by a wish to preserve something like that for eternity, the feeling that this is how the world should be and what everyone should be allowed to experience. While I am rational enough to understand both that this idealized image never matched reality, even back then, and that there are many other idealized images that would be equally worthy or unworthy, I can still very well understand those who draw the wrong conclusions and would make the preservation too high a priority.

Written by michaeleriksson

December 24, 2017 at 7:37 pm

A paradoxical problem with school


An interesting paradoxical effect of the current school system is that it simultaneously prevents children from being children and from developing into adults.

The resolution to this paradox is obviously that positive parts of “being children” are suppressed while the negative parts are enforced and prolonged. (Consider also the similar differentiation into child-like and child-ish human characteristics.)

Children in school are severely hindered in (sometimes even prevented from) just enjoying life, playing, walking around in nature, exercising the child’s curiosity, … At the same time, they are taught just to do what they are told, without thinking for themselves or taking initiatives of their own, removed from any true responsibility, kept with other children instead of with adults*, … Play and similar activities, when they do occur, are often restricted and “organized fun”. The positive part of being a child is now curtailed at around six or seven years of age; the negative is often prolonged into the “children’s” twenties, when they leave college** and get their first jobs—often even moving away from mother for the first time… In contrast, in other times, it was not at all unlikely for teenagers to have already formed families of their own, having children of their own, working at the same tasks as the rest of the adults, etc.***

*Cf. brief earlier discussions on what type of models and examples are presented to children.

**I stress that this is only partially due to the prolonging of studies per se: The more dangerous part is possibly the increasing treatment of college students as children. Cf. e.g. any number of online articles on the U.S. college system, or how Germany has increasingly switched to mandatory-presence lectures in the wake of the Bologna process. (The latter is doubly bad, because it not only reduces the need to take own responsibility, etc.—it also imposes an inefficient way of studying.)

***Indeed, I very, very strongly suspect that the explanation for many of the conflicts between teenagers and their parents is rooted in humans being built for this scenario, with the teenager having a biological drive to assume an adult role and the parent still seeing a little child. Similarly, that some teenagers (especially female ones) treat romantic failures as the end of the world is no wonder—once upon a time, it could have been: Today, the boy-friend at age 15 will usually turn out to be a blip on the radar screen—in other times, he was quite likely to be the future (or even current…) father of her children. Similarly, starting over at 17 might have meant that “all the good ones are taken”.

If we compare two twenty-somethings who only* differ in that the one spent his whole life until now in school and the other went through some mix of home-schooling and early work experience, not even going to college—who will be the more mature, have the better social skills, have more life experience, whatnot? Almost certainly the latter. Of course, the graduate will have other advantages, but it is not a given that they outweigh the disadvantages in the short** term. Why not try to combine the best of both worlds, with a mixture of studies (preferably more independent and stimulating studies) and work*** from an earlier age?

*This is a very important assumption, for the simple reason that if we just pick an average college graduate and an average non-graduate, there are likely to be systematic differences of other types, notably in I.Q. I am not suggesting that non-graduates are automatically superior to graduates.

**In the long term, the graduate will probably catch up—but would he be better off than someone who worked five years after high school and then went to college?

***Here we could run into trouble with child-labor laws. However, these should then possibly be re-evaluated: They are good in as far as they protect children from abuse, unwarranted exploitation, and health dangers; they are bad in as far as they hinder the child’s journey to an adult. I have also heard it claimed (but have not investigated the correctness) that such laws had more to do with enabling schooling than with child protection. To the degree that this holds true, they certainly become a part of the problem.

To boot, schooling often gives an incorrect impression of how the world works in terms of e.g. performance and reward. In school, do your work well and you get a reward (a gold star, an “A”, whatnot); in the work-force, things can be very, very different. Want to get a raise? Then ask for a raise—and give convincing arguments as to why you are worth it. The fact that you have done a good job is sometimes enough; however, most of the time, an employer will simply enjoy your work at the lowest salary he can get away with—why should he spend more money to get the same thing? Similarly, where a teacher will have access to test results and other semi-objective/semi-reliable explicit measures of accomplishment, such measures are rarely available to employers. For that matter, if your immediate superior knows that you do a good job, is he the one setting your pay? Chances are that the decision makers simply do not know whether you are doing a good job—unless you convince them.

At the same time, we must not forget that “being children” is also potentially valuable to the children’s development—it is not just a question of having fun and being lazy. On the one hand, we have to consider the benefit of keeping e.g. curiosity alive and not killing it (as too often is the case in school); on the other, there is much for children to learn on their own (at least for those so inclined). As a child, I probably learned more from private reading and TV documentaries than I did in school even as things were—what if I had had less school and more spare time? Chances are that I would have seen a net gain in my learning… I am not necessarily representative of children in general, but there are many others like me, and at a minimum this points to the problems with a “one size fits all” approach to school.

Or look specifically at play: An interesting aspect of play is that it is a preparation for adult life, and in some sense “play” equals “training”. It is true that the adult life of today is very different from that in, say, the neolithic, but there are many aspects of this training that can still be relevant, including team work, cooperation, leadership, conflict resolution, …—not to mention the benefits of being in better shape through more exercise. These are all things that schools like to claim that they train, but either do not or do so while failing miserably. Chances are that play would do a better job—and even if it does not, it would approach the job differently and thereby still give a benefit. As an additional twist, I strongly suspect that the more active and physical “boys’ play” has suffered more than “girls’ play” in terms of availability, which could contribute to the problems boys and young men of today have. I have definitely read several independent articles claiming that the ADHD epidemic is better cured with more play and an understanding of boys’ needs than with Ritalin (and find the claim reasonable, seeing that ADHD, or an unnamed equivalent, was only a marginal phenomenon in the past).

Excursion on myself:
While I (born in 1975) pre-date the normal border for the “millennial” generation, I have seen a number of problems in my own upbringing and early personality that match common complaints* about millennials or even post-millennials—and for very similar reasons. For instance, I left high school without a clue about adult behavior, responsibilities, skills, …, having never been forced to confront these areas and having never been given much relevant instruction**, be it in school or at home. Once in college, this started to change, notably with regard to own responsibility, but not in every regard. Had I not left the country as an exchange student, thereby being forced to fend for myself in a number of new ways, I would almost certainly have entered the work-force in the state of preparation associated with the millennials. What I know about being an adult, I have mostly learned on my own with only marginal help from school and family***/****—and almost all of it since moving away from home at age nineteen… My sister, length of education excepted, followed an even more millennial path, with even less responsibility at home, a far longer time living with her mother, whatnot, and, as far as I can judge, still has not managed to shake the millennial way—at age forty. Making one’s own decisions and living with the consequences, taking responsibility for oneself or others, not relying on parents to help, understanding from own experience that the world and its population are not perfect, …, these are all things that truly matter to personal development and the ability to be an adult—and it is far better to gradually learn to cope from an early age than to be thrown out into the cold as a twenty-something.

*I stress that these complaints can be too generalizing and/or fail to consider the effects of being younger in general, as opposed to specifically millennial; further, that the problems that do exist are not necessarily equally large everywhere.

**We did have variations on the “home economics” theme, but there was little or no content that I have found to be of relevance to my adult life. To boot, these classes came much too early, with many years going by between the point where the (few) skills were taught and the point where they would have become relevant to my life—so early that I would still have had to re-learn the contents to gain a benefit. That home-economics teachers are pretty much the bottom of the barrel even among teachers certainly did not help.

***In all fairness, it is not a given that I, personally and specifically, would have been receptive had e.g. my mother tried to give me more advice than she did. This should not serve as an excuse for other parents, however. Other aspects, like having to fend more for myself at an earlier date, would have been easily doable—even had I not enjoyed it at the time.

****Sadly, much of what I did pick up from my mother consisted of things that I, in light of my own later experiences, ended up disagreeing with, either because of different preferences or because it was not a good idea to begin with.

Written by michaeleriksson

December 22, 2017 at 7:38 pm