Michael Eriksson's Blog

A Swede in Germany

Posts Tagged ‘language’

Brief thoughts on the decline of Latin and Greek as scientific languages


When I first, as a child, learned of the use of Latin and Greek names for various plants, animals, and whatnot, it was explained to me that this was done to (a) ensure that there was a name that scientists speaking different languages could use and still be understood by each other, and (b) still keep the names in a single language.

I am far from certain that this explanation is correct: More likely, the likes of Linnaeus simply started a tradition based on Latin as the then “science language” for his extensive classifications,* which was kept long after Latin lost ground to modern languages.

*Just like I prefer to write in English over Swedish—why not use the language more likely to be understood?

Still, the purported idea is quite sound: Using a single language allows for greater consistency and enables those so interested to actually learn that single language in order to make identification of the item behind the name that much easier;* and Latin has the advantage of lacking** potential for conflict (as might have been the case if English and French or Mandarin and Japanese were pitted against each other).

*Indeed, even a limited knowledge can be a great help, e.g. by knowing a few commonly occurring suffixes and prefixes.

**Or should do so in any sane era: Some politically correct fanatics apparently consider anything relating to “Western Culture” something to be condemned in a blanket manner. Nothing is certain, except for death, taxes, and human stupidity.

Unfortunately, the scientists of old did not stick to Latin, often turning to Greek. (This includes the names of many (most?) dinosaurs.) However, this situation was still reasonably tolerable.

Then things started to get out of hand: Over the last few decades, names have increasingly been coined in any local language. For instance, this text was prompted by the recent discovery of the dinosaur genus Ledumahadi—apparently named “a giant thunderclap at dawn”* in Sesotho.

*The silliness and apparent lack of “scientificity” of the meaning, however, has little to do with the language. Dinosaurs have been given similarly silly names from the early days of scientific attention (and many less silly names for extinct animals are obscure through e.g. referring to the shape of a tooth). In contrast, many of Linnaeus’s names could draw directly on existing Latin names for at least the genus (as e.g. with “Homo sapiens”—“homo” being the Latin word for human).

Going by Wikipedia, Sesotho has some five or six million native speakers—considerably fewer than Swedish today, and an even smaller proportion of the world population than Swedish had in the days of (my fellow Swede) Linnaeus*. If Linnaeus picked Latin over Swedish back then, how can we justify picking Sesotho over both Latin and English today? The idea is contrary to reason.

If someone were to argue that Latin and Greek, specifically, had grown impractical due to the reduced knowledge among today’s scientists, I might have some sympathies. However, if we concluded that they should go, the reasonable thing to do would be to opt for English as the sole language, thereby ensuring the largest global understandability. If not English, then some other, truly major language, e.g. Mandarin*, Hindi*, or Spanish, should have been considered. Sesotho is useless as a single language, and not using a single language will end with names that appear entirely random. It will usually even be impossible to know what language a name is in without additional research, making it that much harder to find out the meaning.

*Here additional thought might be needed on how the names should be written. (Original writing system? Transliterated to the Latin alphabet? Otherwise?)

For those interested in “local” names, there is always the possibility of introducing an everyday name for the local language: Dinosaurs have normally been known by their scientific names even in the general population, but there is no actual law that this must be the case. Call the Ledumahadi “Ledumahadi” in Sesotho and use a Latin or Greek translation* as the scientific name and the default in other languages.

*My limited knowledge does not allow me to make a suggestion.


Written by michaeleriksson

September 29, 2018 at 5:46 pm

Weak justifications for poor language


When it comes to grammar, common arguments from the “everything goes” school include “there are countless examples of X being correct; ergo, X should always be allowed”, “X is not an error, because Shakespeare used it”, and other analogy claims.

Such arguments are usually faulty through lack of discrimination:* It is quite possible for a certain phrasing, grammatical construct, whatnot to be correct in one situation and incorrect in another—and the analogy must only be used as justification when the circumstances are sufficiently similar. An extreme example is “over-exaggerate”: There are situations in which “over-exaggerate” is a reasonable formulation, but it remains an error of the ignorant in almost all cases. Consider e.g. a politician deliberately exaggerating a problem in order to be more convincing—but doing so to such a degree that he loses believability. He has now over-exaggerated.**

*In the case of e.g. Shakespeare, they also forget that a once valid use might now be outdated; that he, as a poet, might have taken liberties in order to improve rhyme or meter; that his language might have contained dialectal features in a pre-standardization English; and similar.

**Whether such a use of “over-exaggerate[d]” has ever taken place is unknown to me; however, until five minutes before starting this text, I had not even contemplated the possibility that it could ever be anything but wrong—and the rarity of the correctness shows the danger of superficial analogy arguments that much better. (At “five minutes before”, I read the phrase “exaggerating too much” and saw the applicability to “over-exaggerate”.)

A more common example is the use of “and”, “or”, and similar conjunctions at the beginning of a sentence. There are cases where such use could be seen as correct. For instance, “Mary had a little lamb. And everywhere that Mary went, the lamb was sure to go.” would not bring me to the barricades.* I even occasionally use such incorrect** formulations myself, in a manner that I consider acceptable in context. Correspondingly, I cannot condemn a leading “and” in a blanket manner.

*However, I would have preferred “Mary had a little lamb; and everywhere that Mary went, the lamb was sure to go.”, because the full stop implies a strong separation that the “And” then reduces, as if someone was simultaneously pressing down on the gas pedal and braking. (Alternatively, I might have tried to cut the conjunction entirely.) Generally, I always remain a little skeptical: Even when the construct can be argued as grammatically acceptable, there are often reasons of style, logic, coherence, whatnot that speak against it.

**For instance, I might use a leading “And” within brackets in situations where I (a) want to strengthen the connection to the preceding text to overcome the bracket, (b) do not consider the bracketed content important enough for more words or even fear that more words might reduce legibility in context. (Of course, others might argue that if the text was that unimportant, it should have been cut entirely…) Similarly, my footnotes are almost always intended to be read in the immediate context of the main text, and will not always be complete sentences or thoughts without that context—some footnotes and brackets could be seen as a branch on a trunk and only make sense when the branch is entered from the trunk. (Why not forego the bracket + “And”, as another case of simultaneously hitting the gas pedal and braking? Well, the bracket is often beneficial to break out less important or less on-topic thoughts, as with the current. From the point of view of the main text, the bracket serves to separate such parts. However, sometimes the connection with the unbracketed text then becomes too weak from the point of view of the bracketed, and the “And” remedies this. This argument does not hold with Mary and her little lamb.)

However, most practical uses remain both incorrect and unacceptable, and those critical of these constructs do not typically suggest a blanket ban—only a ban of the incorrect cases. For instance, where someone with an even semi-decent understanding of English would write “Mary had a little lamb and a goat.”, a journalist or a pre-schooler might write “Mary had a little lamb. And a goat.”, which is incorrect by any reasonable standard.* However, the problem does not reside with the “And”, but with the way a single sentence or thought has been artificially, confusingly, and unnecessarily divided into two parts, one of which cannot stand on its own. The error is one of interpunctuation—not of what word is allowed where. “Mary went home. And took the lamb with her.” makes the same mistake, if a bit more subtly. A faulty separation of a subordinate clause is a common variation, and often includes a far wider range of words. Consider e.g. “John went home. Because Mary was sick.”: Both parts contain a complete sentence, and the situation might be salvaged by simply removing the “Because” (at the cost of no longer having the causal connection); however, a “because” clause can come both after and before its main clause, which can cause a lot of ambiguity. For instance, how do we know that the intention was not “John went home. Because Mary was sick, Tom also went home.”, with a part of the text missing?** What if the text, as actually given, had read “John went home. Because Mary was sick. Tom also went home.”? Was it John, Tom, or possibly both, who went home because of Mary’s health?

*Notably, the complete-sentence standard; however, see an excursion for an alternate suggestion and more detail.

**This gives another reason to stick to the rules: If a text contains language errors, it is often not clear why; and by deliberately deviating from correct grammar, the ability to detect accidental errors and to deduce the true intended meaning in the face of errors is reduced. Equally, a deliberate deviation can make the reader assume an accidental error where none is present, leading to unnecessary speculation. Other examples that can soon become tricky include leaving out “unnecessary” uses of “that”, “unnecessary” commas, and similar: Leaving them out can become a habit, leading to their exclusion in a situation where they were definitely needed.

Someone criticizing such sentences usually does so, directly or indirectly, because of the division—of which “And” is just a result. Even if we were to say that sentences are allowed to start with “and”, “or”, whatnot, these sentences would still be wrong, because they still make an absurd and ungrammatical division. As an analogy, if someone has a viral infection accompanied by a fever, the infection does not go away because the patient’s body temperature is declared normal. More generally, we must not focus on superficial criteria, like a temperature or an optical impression of a sentence—we actually have to understand what goes on beneath the surface and we have to ask the right questions. Above, the right question is “Is the interpunctuation correct and reasonable?”—not whether a sentence starts with an “and”.

Excursion on my historical take on “and” et al. and on the reverse mistake:
In my younger days, I belonged to the “never acceptable” school, largely through committing the opposite error of “sometimes wrong; ergo, always wrong”—something equally to be avoided. My opinions have become more nuanced over the years. However, I still feel that these constructs should be left to those with a developed understanding, because (a) by simply resolving to never start a sentence with “and” et al., a great number of other mistakes will be far less likely to occur (cf. above), and (b) even most grammatically acceptable uses are better solved in other ways (cf. footnote above). I would also argue that a grammar which does categorically forbid these constructs would be perfectly valid and acceptable—it just happens that established English grammar does not. (In contrast, a grammar that allows e.g. “Mary had a little lamb. And a goat.”, while conceivable, would make a mockery of the concepts of full stop and sentence. The purpose of these is to give the reader information about the text not necessarily clear from the words themselves; and it would be a lesser evil to abolish* them entirely than to spread misinformation through them.)

*while interpunctuation is a wonderful thing writing systems tend to start without it uptothepointthatthereisnotevenwordseparation we do not need interpunctuation but do we really want to forego it fr tht mttr nt ll wrtng sstms s vwls still misleading information, is even worse

Excursion on complete sentences:
A typical criterion for the use of full stops is that all sentences are complete, typically containing at a minimum subject and verb. However, I would argue that it is more important to have a thought* of sufficient completeness** and sufficient context to understand that thought. For instance, this is the case when someone takes a fall and says “ouch”; a soldier shouts “incoming” or a surgeon says “scalpel”; a (compatible) question is answered with “yes”, “no”, “probably”, “the red one”, …; one opponent exclaims “son of a bitch” to the other; any number of imperatives are used (“buy me an ice cream”, “assume that X”); etc. Indeed, a subject–verb criterion might not even make sense in all languages. Many Latin sentences, e.g., will only contain an implicit subject, implying that at least an explicit subject cannot be a universally reasonable criterion. (The English imperatives could also be seen as a case of an implicit subject.)

*I see myself supported by the more original and non-linguistic meanings of “sentence”, which are strongly overlapping with “thought”. Also cf. “sense” and “sentiment”.

**I deliberately avoided “complete thought”, which could imply that the entirety of a thought is expressed. This, in turn, is only rarely the case with a single sentence. (Cf. [1].)

However, these examples are only valid given the right context: Go up to a random person on the street and say “yes”, and chances are that he will be very confused.

“And a goat.” will usually fail this criterion, because it is so heavily tied* to something else that it cannot stand alone. Usually, this something is the preceding own statement (“Mary had a little lamb.”), and the best solution would be to integrate the two (“Mary had a little lamb and a goat.”) or to complete the missing portions (“Mary had a little lamb. And she had a goat.”). However, there are some cases that can be argued, mostly relating to immediate interactions (spoken word, texting, and similar). Consider e.g. “And a goat.” as an afterthought** to a previous complete thought or as an interjection by a second speaker—and compare it with “Oh, wait, it just occurred to me that I would also like to have a goat.” resp. “I agree with the previous speaker, but would like to add that we should also buy a goat.”, and similar overkill. In contrast, “Mary had a little lamb. And everywhere that Mary went, the lamb was sure to go.” has two separate and independent thoughts, both of which are complete subject–verb sentences, both of which could be taken as stand-alone claims with minimal context. (Except as far as the “And” sends a confusing signal and would be better removed in a stand-alone context; however, the result remains a perfectly valid sentence even in the traditional sense.)

*Interestingly, just “A goat.” is more likely to be a valid thought, because the “And” points to something else that must already have been communicated.

**With sufficient delay that the afterthought cannot be integrated into the whole: If someone is currently writing an essay and sees the sudden need to add a goat to the discussion, there is no justification for “And a goat.”—there is more than enough time to amend the text before publication.

However, in most cases, I would recommend sticking to the traditional “complete sentence” criterion, because it makes a useful proxy and can serve to avoid sloppy mistakes when trying to be clever.

Excursion on full-stops for effect:
Full-stops are often deliberately (mis-)used for e.g. dramatic effect or to imitate the spoken word. For instance, “Mary had a little lamb. And a goat.” might arise in an attempt to put extra emphasis on the latter, to simulate a “dramatic pause”, or similar. I recognize that there is some benefit to this effect—but not to how it is achieved. I strongly recommend using the “m-dash” (“—”) for such purposes—and do so myself all the time.* To boot, I would strongly advise against striving for a literal pause, seeing that the written and spoken word are not identical in their character. Notably, most proficient readers do not “sound out” the words in such a manner that an intended pause would actually occur.

*To the point that even I cannot deny overuse… Then again, I do not suggest that others change the frequency of their use of the effect, just that they replace one means of achieving it with another. Some might raise objections against this use of the m-dash, e.g. based on historical use for parenthesis; however, I do not use the old semantics, there are other means to achieve a parenthesis effect, and the m-dash is otherwise fairly rare in modern English.

A particularly idiotic use is the insertion of a full-stop after every word, to indicate that each word is heavily emphasized and separated in time, e.g. “Do. Not. Do. This.”: The only situation where this might even be negotiable is when the spoken word is to be (pseudo-)transcribed, e.g. as part of a dialogue sequence in a book. For a regular text, including e.g. a post on a blog or in a forum, textual means of emphasis should be used (italicization, underlining, bold type, …)—the written word is not a mere transcription of the spoken.

Excursion on full-stops in long sentences:
I sometimes have the impression that an artificial full-stop has been inserted to prevent a sentence from being too long, by some standard. (Possibly, some journalists write a correct sentence, see it marked as “too long” by a style checker, and just convert a comma to a full-stop to land below the limit. Then again, some journalists appear to use a full-stop as the sole means of interpunctuation, even when length is not a concern…) The result is a completely unnecessary hindrance of the reader: Because valuable hints are now absent or, worse, misleading, it becomes harder to read the sentence. (Note that there is no offsetting help, because the actual thought expressed does not magically become shorter when a few full-stops are inserted.) For instance, when reading the FAZ (roughly, the German equivalent of the New York Times), I have often encountered a complete sentence of a dozen or more words, followed by “Because”/“Weil” at the beginning of a subordinate clause of another dozen words—and then a full-stop… The result is that I, under the assumption that the grammar is correct, “close” the first sentence, absorb the second with the expectation of applying the causality to a later main clause, and am then thrown entirely off track. I now have to go back to the first sentence, (at least partially) re-read it, make the causal connection, re-think the situation, and then scan forwards to the end of the subordinate clause again, to continue reading. It would have been much, much better to keep the subordinate clause joined by the grammatically correct comma—the more so, the longer the sentences.

Meta-information:
My use of full-stops and capital letters in the above examples is deliberately inconsistent. Mostly, I have tried to avoid them in order to not complicate matters around the resulting double* interpunctuation. However, many examples have required them to be understandable. When it comes to standalone “And” vs “and” (quotation marks included), I have used “And” when it appeared thus in the example, and “and” when speaking of the word more generically.

*Examples like ‘abc “efg.”, hij’ are awkward and can be hard to read. I also categorically reject some outdated rules around interpunctuation and quotes that originated to solve pragmatic problems with equally outdated printing technology.

I found the asymmetry of “Mary had a little lamb and a goat.” a little annoying, and considered adding a “g-word” before “goat”; however, a reasonable “g-word” was hard to find* and some of the later stand-alone examples became awkward.

*The most orthographically and semantically obvious example is “giant”, but it is typically pronounced differently. Other candidates made too little sense.

Written by michaeleriksson

September 21, 2018 at 12:11 am

X began Y-ing


Disclaimer: I set out to write a text just two or three paragraphs long. I was soon met with a series of grammatical complications and aspects that I had hitherto not considered—and I raise the warning that there could be others that I still have not discovered. However, my main objection is one of style—not grammar. (No matter what impression the text could give: It only takes so long to say “it is ugly”.)

I currently spend more time than usual reading fiction. This leads me to again, and again, and again encounter one of the ugliest formulation patterns in the English language: X began Y-ing.

He began running. She started turning. It commenced raining. Etc.

Not only are they very ugly, they are also potentially misleading, because a Y-ing construct* usually has the implication of something (already) ongoing, as with “John, running, began to tire” or “John began to tire [while] running”**. This is particularly bad with “started”, because “she started turning” could be read as “she experienced a start while turning”. The much sounder construct is “to” and an infinitive—“John began to run” over “John began running”. Indeed, I often find myself suppressing a snarky question of “What did X begin to do, while Y-ing?”, even knowing what was meant. In some cases and contexts, other formulations might be suitable, e.g. “John began with running [to lose weight]” or “John began his running [for the day]”. An entirely different road is also possible, e.g. “John broke into a run”, or “John took up running” (as a smoother alternative for the weight-loser above).

*The main cases usually are participles (or, in a noun context, gerunds). I am uncertain how “Y-ing” in “X began Y-ing” should be classified, especially since it logically fills the role of an infinitive. Conceivably, it is a gerund (cf. an excursion on “stopped” below), which would give it some grammatical justification, but would not reduce its ugliness or potential ambiguity. The matter is complicated by e.g. “John began running slowly”, which would point to a participle, not a gerund. (It might be explained as intending “John slowly began running”, but that would change the meaning.) To boot, the same string of characters can sometimes be interpreted in different roles and meanings in a given sentence—and the gerund–participle division seems very vulnerable to this (but I will ignore such complications in the rest of the text).

**This example is equally ugly and not something that I would recommend (at least not without the “while”). The purpose of the examples is solely to illustrate the potential confusion.

Moreover, even a construct using “began” is often just a waste of space—a simple “John ran” will often do the trick. That he began to do so will often be clear from context, redundant, or simply not interesting in the overall situation. Consider e.g. “John walked along the path.* A bear burst out of the woods and John ran.”: The use of “began to run” (or “began running”) adds nothing but length to the text.

*This sentence makes the issue crystal clear. However, it is not always necessary, because (a) John is more likely to have walked than to have run, and (b) what he did before the encounter with the bear is usually of secondary importance to a work of fiction (but the increased precision might be beneficial in non-fiction). In a pinch, that John was already running could be brought over by “John ran faster”. In other cases, a “began to Y”/“began Y-ing” brings no value at all, as with “John jumped into the water and began to swim”—he was hardly swimming before, so “[…] and swam” is better. The variation “John jumped into the water and began to drown” / “[…] drowned” only sees a significant difference when the event/action/whatnot was not completed, here e.g. because John was rescued. Often the action is so short that its commencement will almost always imply its conclusion—using “she started to turn” over “she turned” is hardly ever justifiable.

My advice: The first attempt should use a single, ordinary verb, e.g. “John ran”. If this does not work in the overall context, go with “began to”, e.g. “John began to run”. Never use “began Y-ing”.

Excursion on “stopped” and similar words:
What about the mirror image “John stopped running”? I consider this formulation more acceptable, but also suboptimal, and would not see it as a justification for e.g. “X started* Y-ing”. This case differs in several regards: Firstly, the absence of strong alternatives. (There is no mirror image to “John ran”**, and “John stopped to run” is both uglier and more ambiguous than “John stopped running”.) Secondly, the lesser ambiguity. Thirdly, being less ugly in my eyes. Fourthly, having a greater grammatical justification, seeing that an interpretation as something ongoing is reasonably compatible (unlike with “start”): “John stopped running” could, if somewhat generously, be seen as “John, currently running, stopped doing so”. (Contrast this with a hypothetical and paradoxical “John, currently running, started doing so”…) Alternatively, an interpretation as gerund is less awkward than above, e.g. as “John stopped [the activity of] running”.***

*For better symmetry with “stop”, I will use “start” in this excursion. The main text mostly uses “began”, because I have seen “began” much more often in the last few days (and likely generally).

**“John stopped” would be a possible solution when only one activity is ongoing, and especially for activities that imply a movement in space (e.g. “running”). However, this will not work generally: For instance, “John sang while walking down the road. Feeling a sneeze coming on, he stopped.” is not unique enough: Did he stop singing, walking, or both? (Note that this ambiguity is more likely to affect the story than whether John ran, walked, or rested before meeting the bear above.)

***Then again, this might be better saved for more ongoing activities, states, whatnot. I would find this formulation less natural with someone who is at this very moment running, and more natural with someone who runs from time to time for exercise. Similarly, “John stopped smoking” would normally imply that he gave up smoking, rather than that he extinguished a cigarette. The same applies to the use of a gerund with “start” (“John started running to lose weight”—not “John started running to escape the bear”). In both cases, a reformulation using “gave up” resp. “took up”, or similar, is beneficial both to reduce ambiguity and to reduce ugliness. (Note that “John took up running” definitely implies a gerund. Also note that “John took up sports” works better than “John began sports”.)

A way out is to avoid “stopped” in favour of e.g. “ceased”: “X ceased to Y” is less problematic than “X stopped to Y”. For the moment, I suggest either using this way or, when the context allows it, just “X stopped”—never “X stopped Y-ing”.

Constructs like “John continued running” are somewhere between the “start” and “stop” cases: On the one hand, the “ongoing” semi-justification holds similarly to “stop”; on the other, there are alternatives similarly to “start” (“John continued to run” and “John ran”, the latter actually being stronger than for “start”). These alternatives are my recommendation.

A “John continued running” might have some justification with a different intention, as with “John [who was originally walking] continued [now] running [because he saw a bear]”, but here a formulation like “John continued at a run” is usually better.

Excursion on “to … to …”:
A minor potential ugliness when using “to” is variations of “John wanted to begin to run”, where a “to” + infinitive appears repeatedly. The temptation to use “John wanted to begin running” is understandable, but I would recommend a greater restructuring. In the given example, the best solution is usually to just drop “to begin” entirely—“John wanted to run”. Alternatively, something like “John wanted to take up running” works again.

Excursion on other verbs:
My draft contained the following as a backup argument:

Of course, other non-auxiliary* double-verb constructs usually** follow the “to” pattern: “John wanted to run”, not “John wanted running”—conjugated verb, “to”, infinitive verb.

*An auxiliary verb could indeed use Y-ing as a participle, e.g. “John is running”—or use some other variation, e.g. “John must run” (an infinitive without “to”). Generally, some caution must be raised due to the different roles of verbs, which could imply different grammatical rules.

**A potential group of exceptions is those like “stop”, cf. excursion. While no other group of exceptional verbs occur to me, they might exist.

During proof-reading, exceptions like “loved running”, “disliked running”, “ran celebrating”, and my own uses of “took up running” belatedly occurred to me. These make the issue of precedence trickier, and I would rather not do the leg-work on the issue. However, limited to these cases:

“Took up running” is a strict gerund phrase, to the point that it can be disputed whether it is even a double-verb construct. (“Took up sports”, again, works much better than “began sports”, pointing strongly to a verb–noun construct. A gerund is, obviously, a quasi-noun. “Took up to run” is not even a possibility.) Due to its character, there is also much less room for ambiguity.

“Ran celebrating” serves more to exemplify my objections against “began running” than to conquer them: Here two activities take place simultaneously (running and celebrating) that are not that closely connected. Someone is in a state of celebrating (e.g. having just won a track race) and is running while being in this state (e.g. during a lap of honor). Prior to winning, he was running without celebrating; after the honor lap, he will not be running but will still be celebrating. Indeed, “he began, celebrating, to run” shows how awkward a formulation like “he began celebrating” is. Even when the connection is strong, the modification by the one verb (a participle) is not necessarily on the other verb, but more (or wholly) on the actor, in all cases that I can think of at the moment, e.g. “he slept dreaming” (broadly equivalent to “he slept and was dreaming”; and as opposed to “he slept dreamingly”, broadly equivalent to “he slept and did this dreamingly”).

As for “loved running” (ditto “disliked running”), it is usually solidly in gerund territory and refers to more general activities than e.g. “John began running” typically does, e.g. “John loved running as a means of exercise”. In contrast, even if we allowed “John loved running from the bear” (referring to that one situation), it would make John a bit of a freak—and it could easily be replaced by “John loved to run from the bear”. Then again, I am skeptical of allowing “John loved running from the bear” in the first place: While it is not as ugly and ambiguous* as “John started running”, the gerund** issue arises and the construct brings no additional value over “John loved to run from the bear”.

*But it has some ambiguity: John might e.g. have been filled with love for his wife while running.

**Replacing “running” with “sports” gives us the nonsensical “John loved sports from the bear”, speaking against a gerund, while variations like “John loved running speedily from the bear” point to a participle. Can the use be justified if it is not a gerund? Would it not be better to consistently use “to” + infinitive?

Written by michaeleriksson

September 7, 2018 at 4:36 am

Posted in Uncategorized


A call for more (!) discrimination

with 3 comments

The word “discrimination” and its variations (e.g. “to discriminate”) is hopelessly abused and misunderstood in today’s world. Indeed, while properly referring to something (potentially) highly beneficial and much needed, it has come to be mere shorthand for longer phrases like “sexual discrimination” and “racial discrimination”.* Worse, even these uses are often highly misleading.** Worse yet, the word has increasingly even lost the connection with these special cases, degenerating into a mere ad hominem credibility killer or a blanket term for any unpopular behavior related (or perceived as related) to e.g. race.***

*Note that it is “sexual” and “racial”—not “sexist” and “racist”. The latter two involve ascribing an intention and mentality to someone else, beyond (in almost all cases) what can possibly be known—an ascription that is sometimes manifestly false. Further, their focus on the intent rather than the criteria would often make them unsuitable even in the rare cases where the use could otherwise be justified.

**E.g. because a discrimination on a contextually rational and reasonable criterion (e.g. GPA for college admissions) indirectly results in differences in group outcome, which are then incorrectly ascribed to e.g. “racial discrimination”. The latter, however, requires that race was the direct criterion for discrimination.

***Including e.g. having non-PC opinions about some group or expressing such an opinion, neither of which can in any meaningful sense be considered discrimination—even in cases where the opinion or its expression is worthy of disapproval. This includes even the (already fundamentally flawed) concept of micro-aggressions.

What then is discrimination? Roughly speaking: The ability to recognize the differences between whatever individuals/objects/phenomena/… are being considered, to recognize the expected effects of decisions involving them, and to act accordingly. Indeed, if I were to restrict the meaning further, it is the “act” part that I would remove…* (Also see an excursion below on the Wiktionary definitions.)

*E.g. in that I would not necessarily consider someone discriminating who flipped a coin and then hired exclusively men or exclusively women based on the outcome—apart from the greater group impact, this is not much different from the entirely undiscriminating hiring by a coin flip per candidate. I might possibly even exclude e.g. the feminist stereotype of a White Old Man who deliberately hires only men because of the perceived inferiority of women: This is, at best, poor discrimination on one level and a proof of a lack of discrimination on another. Cf. below. (While at the same time being a feminist’s prime example of “discrimination” in the distorted sense.)

For instance, deciding to hire or not to hire someone as a physician based on education and whether a license to practice medicine is present, is discrimination. So is requiring a lawyer to have passed a bar exam in order to perform certain tasks. So is requiring a fire fighter to pass certain physical tests. So is using easier tests for women* than for men. So is using health-related criteria to choose between foodstuffs. So is buying one horse over another based on quality of teeth or one car over another based on less rust damage. Etc. Even being able to tell the difference between different types of discrimination based on justification and effects could be seen as discrimination!

*This is, specifically, sexual discrimination, which shows that even such special cases can have the blessing of the PC crowd. It also provides an example of why it is wrong to equate “sexual” and “sexist”, because, no matter how misguided this discrimination is, it is unlikely to be rooted in more than a greater wish for equality of outcome. To boot, it is an example of poor discrimination through focus on the wrong criteria or having the wrong priorities. (Is equality of outcome when hiring really more important than the lives of fire victims?!?)

Why do we need it?

Discrimination is very closely related to the ability to make good decisions (arguably, any decision short of flipping a coin)—and the better someone is at discriminating, the better the outcomes tend to be. Note that this is by no means restricted to such obvious cases as hiring decisions based on education. It also involves e.g. seeing small-but-critical differences in cases where an argument, analogy, or whatnot does or does not apply; or being able to tell what criteria are actually relevant to understanding the matter/making the decision at hand.*

*Consider e.g. parts of the discussion in the text that prompted this one; for instance, where to draw the line between speech and action, or the difference between the IOC’s sponsor bans and bans on kneeling football players. Or consider why my statements there about employer’s rights do not, or only partially, extend to colleges: With a lack of understanding, someone might see the situations as analogous, based e.g. on “it’s their building” or “it’s their organization”. Using other factors, the situation changes radically, e.g. in that the employer pays the employee while the college is paid by the student; that co-workers who do not get along can threaten company profits, while this is only rarely the case with students who do not get along; and that a larger part of the “college experience” overlaps with the student’s personal life than is, m.m., the case for an employee—especially within the U.S. campus system. (For instance, what characteristic of a college would give it greater rights to restrict free speech in a dorm than a regular landlord in an apartment building? A lecture hall, possibly—a dorm, no.)

Indeed, very many of today’s societal problems and poor political decisions go back, at least partially, to a failure to discriminate, or a failure to discriminate based on appropriate criteria.

Consider e.g. the common tendency to consider everything relating to “nuclear” or “radioactive” to be automatically evil (or the greater evil): Nuclear power is “evil”, yet fossil energies do far more damage to the world. The nuclear bombings of Japan were “evil”, yet their conventional counterpart killed more people. Radioactive sterilization of food is “evil”, yet science considers it safe—much unlike food poisoning… What if discrimination were done not by name or underlying technology, but rather based on the effects, risks, and opportunities?

Consider the (ignorant or deliberate) failure to discriminate between e.g. anti-Islamists and anti-Muslims or immigration critics and xenophobes, treating them the same and severely hindering a civilized public debate.

Consider the failure to discriminate between school children by ability and the enforcing of a “one size fits all” system that barely even fits the average*, and leaves the weakest and strongest as misfits—and which tries to force everyone to at a minimum go through high school (or its local equivalent). (Germany still does a reasonable job, but chances are that this will not last; Sweden was an absolute horror already when I went to school; and the U.S. is a lot worse than it should and could be.)

*Or worse, is so centered on the weakest that school turns into a problem even for the average… Indeed, some claim that e.g. the U.S. “No Child Left Behind Act” has done more harm than good for this very reason.

Consider the failure to discriminate between politicians based on their expected long-term effect on society, rather than the short-term effect on one-self.

Consider the failure to discriminate between mere effort and actual result, especially with regard to political decisions. (Especially in the light of the many politicians who do not merely appear to fail at own discrimination, but actually try to fool the voters through showing that “something is being done”—even should that something be both ineffective and inefficient.)

Consider the failure to discriminate between those who can think for themselves (and rationally, critically, whatnot) and those who cannot when it comes to e.g. regulations, the right to vote, self-determination, …

Consider the failure to discriminate between use and abuse, e.g. of alcohol or various performance enhancing drugs. (Or between performance enhancing drugs that are more and less dangerous…)

Consider the undue discrimination between sex crimes (or sexcrimes…) and regular crimes, especially regarding restrictions on due process or reversal of reasonable expectations. (Whether sex is involved is not a valid criterion, seeing that e.g. due process is undermined as soon as any crime is exempt from it.)

Consider the undue discrimination between Israelis and Palestinians by many Westerners, where the one is held to a “Western” standard of behavior and the other is not. (Nationality is not relevant to this aspect of the conflict.)

A particularly interesting example is the classification of people not yet 18 as “children”*, which effectively puts e.g. those aged 3, 10, and 17 on the same level—an often absurd lack of discrimination, considering the enormous differences (be they physical, mental, in terms of experience or world-view, …) between people of these respective ages. Nowhere is this absurdity larger than in that the “child” turns into an “adult” merely through the arrival of a certain date, while being virtually identical to who he was the day before—and this accompanied by blanket rights and obligations, with no other test of suitability. Note how this applies equally to someone well-adjusted, intelligent, and precocious as it does to someone who is intellectually over-challenged even by high school and who prefers to lead a life of petty crimes and drug abuse. (To boot, this rapid change of status is highly likely to make the “children” less prepared for adulthood, worsening the situation further.)

*The size of the problem can vary from country to country, however. In e.g. the U.S. there is a fair chance that a word like “minor” will be used, especially in more formal contexts, which at least reduces the mental misassociations; in Sweden, “barn” (“child”) dominates in virtually all contexts, including (at least newer) laws.

However, there are many other problems relating to the grouping of “children” with children, especially concerning undifferentiated societal and political debates around behavior from and towards these “children”. This in particular in the area of sex, where it is not just common to use terms like “pedophile”* and “child-porn” for the entire age-range, but where I have actually repeatedly seen the claim that those sexually attracted to someone even just shy of 18 would be perverts**—despite the age limit being largely arbitrary***, despite that many are at or close to their life-time peak in attractiveness at that age, despite that most of that age are fully sexually mature, and despite that people have married and had children at considerably lower ages for large stretches of human history.

*This word strictly speaking refers to someone interested in pre-pubescent children, making it an abuse of language not covered by the (disputable) justification that can be given to “child-porn” through the wide definition of “child”. Even if the use was semantically sound, however, the extremely different implications would remain, when children and “children” at various ages are considered.

**Presumably, because the classification of someone younger as a “child” has become so ingrained with some weak thinkers that they actually see 18 as a magic limit transcending mere laws, mere biological development, mere maturity (or lack thereof), and leaving those aged 17 with more in common with those aged 8 than with those aged 18.

***Indeed, the “age of consent” is strictly speaking separate from the “age of maturity”, with e.g. Sweden (15) and Germany (14 or 16, depending on circumstances) having a considerably lower age of consent while keeping the age of maturity at 18.

Not all discrimination, depending on exact meaning implied, is good, but this is usually due to a lack of discrimination. Consider e.g. making a hiring decision between a Jewish high-school drop-out and a Black Ph.D. holder: With only that information present, the hiring decision can be based on either the educational aspect, the race/ethnicity aspect, or a random choice.* If we go by the educational or race aspect, there is discrimination towards the candidates. However, if the race aspect is used, then this is a sign that there has been too little or incorrect discrimination towards the hiring criteria—otherwise the unsuitability of the race aspect as a criterion would have been recognized. This, in turn, is the reason why racial discrimination is almost always wrong: It discriminates by an unsound criterion. We can also clearly see why “discrimination” must not be reduced to the meanings implied by “racial [and whatnot] discrimination”—indeed, someone truly discriminating (adjective) would not have been discriminating (verb) based on race in the above example.

*Or a combination thereof, which I will ignore: Including the combinations has no further illustrative value.

Excursion on proxy criteria:
Making decisions virtually always involves some degree of proxy criteria, because it is impossible to judge e.g. how well an applicant for a job fares on the true criteria. For instance, the true criterion might amount to “Who gives us the best value for our money?”. This, however, is impossible to know in advance, and the prospective employer resorts to proxy criteria like prior experience, references, education, … that are likely to give a decent, if far from perfect, idea of what to expect. (Indeed, even these criteria are arguably proxies-for-proxies like intelligence, industriousness, conscientiousness, …—and, obviously, the ability to discriminate!)

Unfortunately, sometimes proxies are used that are less likely to give valuable information (e.g. impression from an interview) and/or are “a proxy too far” (e.g. race). To look at the latter, a potential U.S. employer might (correctly) observe that Jews currently tend to have higher grades than Blacks and tend to advance higher in the educational system, and conclude that the Jew is the better choice. However, seeing that this is a group characteristic, it would be much better to look at the actual individual data, removing a spurious proxy: Which of the two candidates does have the better grades and the more advanced education—not who might be expected to do so based on population statistics.

As an aside, one of my main beefs with increasing the number of college graduates (even at the cost of lowering academic standards to let the unsuitable graduate) is that the main role of a diploma was to serve as a proxy for e.g. intelligence and diligence, and that this proxy function is increasingly destroyed. Similarly, the greater infantilization of college students removes the proxy effect for ability to work and think for oneself.

Excursion on discrimination and double standards:
Interestingly, discrimination otherwise rejected, usually relating to the passage of time, is sometimes arbitrarily considered perfectly acceptable and normal. A good example is the age of maturity and variations of “age of X” (cf. above)—a certain age is used as an extremely poor and arbitrary proxy for a set of personal characteristics.

In other cases, such discrimination might have a sufficient contextual justification that it is tolerated or even considered positive. For instance, even a well qualified locker-room attendant of the wrong sex might not be acceptable to the visitors of a public bath, and the bath might then use sex as a hiring criterion. Not allowing men to compete in e.g. the WTA or WNBA can be necessary to give women a reasonable chance at sports success (and excluding women from the ATP or the NBA would then be fair from a symmetry point of view). Etc.

Then there is affirmative action…

Excursion on how to discriminate better:
A few general tips on how to discriminate better: Question whether a criterion is actually relevant, in itself, or is just a proxy, proxy-for-a-proxy, proxy-for-a-proxy-for-a-proxy, …; and try to find a more immediate criterion. Question the effectiveness of criteria (even immediate ones). Do not just look at what two things have in common (e.g. building ownership, cf. above) but at what makes them different (e.g. being paid or paying). Try to understand the why and the details of something and question whether your current assumptions on the issue are actually correct—why is X done this way*, why is Y a criterion, why is Z treated differently, … Try to look at issues with reason and impartiality, not emotion or personal sympathy/antipathy; especially, when the issues are personal, involve loved ones or archenemies, concern “pet peeves”, or otherwise are likely to cause a biased reaction.

*The results can be surprising. There is a famous anecdote about a member of the younger generation who wanted to find out why the family recipe for a pot-roast (?) called for cutting off part of it in a certain manner. Many questions later, someone a few generations older, and the origin of the tradition, revealed the truth: She had always done so in order to … make the pot-roast fit into her too small pan. Everyone else did so in the erroneous belief that there was some more significant purpose behind it—even when their pans were larger.

Excursion on when not to discriminate (at all):
There might be instances where potential discrimination, even when based on superficially reasonable grounds, is better not done.

For instance, topics like free speech, especially in a U.S. campus setting, especially with an eye on PC/Leftist/whatnot censorship, feature heavily in my current thoughts and readings. Here we can see an interesting application of discrimination: Some PC/Leftist/whatnot groups selectively (try to) disallow free speech when opinions contrary to theirs are concerned. Now, if someone is convinced that he is right, is that not a reasonable type of discrimination (from his point of view)?

If the goal is to push one’s own opinion through at all cost, then, yes, it is.

Is that enough justification? Only to those who are not just dead certain and lacking in respect for others, but who also are very short-sighted:

Firstly, as I often note, there is always a chance that even those convinced beyond the shadow of a doubt are wrong. (Indeed, those dead certain often turn out to be dead wrong, while those who turn out to be right often were open to doubts.) What if someone silences the opposition, forces public policy to follow a particular scheme without debate, indoctrinates future generations in a one-sided manner, …—and then turns out to be wrong? What if the wrongness is only discovered with a great delay, or not at all, due to the free-speech restrictions? Better then to allow other opinions to be uttered.

Secondly, if the power situation changes, those once censoring can suddenly find themselves censored—especially, when they have themselves established censorship as the state of normality. Better then to have a societal standard that those in power do not censor those out of power.

Thirdly, there is a dangerous overlap between the Niemöller issue and the fellow-traveler fallacy: What if the fellow travelers who jointly condemn their common enemies today, condemn each other tomorrow? (Similarly, it is not at all uncommon for a previously respected member of e.g. the feminist community to be immediately cast out upon saying something “heretic”.) Better then to speak up in defense of the censored now, before it is too late.

Fourthly, exposure to other opinions, dialectic, eclecticism, synthesis, … can all be beneficial for the individual—and almost a necessity when we look at e.g. society as a whole, science, philosophy, … Better then to not forego these benefits.

Fifthly, and possibly most importantly, censorship is not just an infringement of the censored speaker’s rights—it is also an infringement of the listeners’ rights. If we were (I do not!) to consider the act against the speaker justified (e.g. because he is “evil”, “racist”, “sexist”, or just plainly “wrong”), by what reasoning can this be extended to the listeners? Short of “it’s for their own good” (or, just possibly, “it’s for the greater good”), I can see nothing. We would then rob others of their right to form their own opinions, to expose themselves to new ideas, whatnot, in the name of “their own good”—truly abhorrent. Better then to allow everyone the right to choose freely, both in terms of whom to listen to and what to do with what is heard.

Excursion on failure to discriminate in terminology:
As with the child vs. “child” issue above, there are many problems with (lack of) discrimination that can arise through use of inappropriate words or inconsistent use of words. A very good example is the deliberate abuse of the word “rape” to imply a very loosely and widely defined group of acts, in order to ensure that “statistics” show a great prevalence, combined with a (stated or implied) more stringent use when these “statistics” are presented as an argument for e.g. policy change. Since there is too little discrimination between rape and “rape”, these statistics are grossly misleading. Other examples include not discriminating between the words* “racial” and “racist”, “[anabolic] steroid” and “PED”, “convicted” and “guilty”, …

*Or the concepts: I am uncertain to what degree the common abuse of “racist” for “racial” is based on ignorance of language or genuine confusion about the corresponding concepts. (Or intellectually dishonest rhetoric by those who do know the difference…) Similar remarks can apply elsewhere.

(In a bigger picture, similar problems include e.g. euphemistic self-labeling, as with e.g. “pro-life” and “pro-choice”; derogatory enemy-labeling, e.g. “moonbat” and “wingnut”; and emotionally manipulative labels on others, e.g. the absurd rhetorical misnomer “dreamer” for some illegal aliens. Such cases are usually at most indirectly related to discrimination, however.)

Excursion on Wikipedia and Wiktionary:
Wikipedia, often corrupted by PC editors [1], predictably focuses solely on the misleading special-case meanings in the allegedly main Wikipedia article on discrimination, leaving appropriate use only to alleged special cases… A particular perversity is a separate article on Discrimination in bar exam, which largely ignores the deliberate discriminatory attempt to filter out those unsuited for the bar and focuses on alleged discrimination against Blacks and other ethnicities. Not only does this article obviously fall into the trap of seeing a difference in outcome (on the exam) as proof of differences in opportunity; it also fails to consider that Whites are typically filtered more strongly before* they encounter the bar exam, e.g. through admittance criteria to college often being tougher.**

*Implying that the exam results of e.g. Blacks and Whites are not comparable. As an illustration: Take two parallel school-classes and the task to find all students taller than 6′. The one teacher just sends all the students directly to the official measurement, the other grabs a ruler and only sends those appearing to be taller than 5′ 10”. Of course, a greater proportion of the already filtered students will exceed the 6′ filtering… However, this is proof neither that the members of their class would be taller (in general), nor that the test would favor their class over the other.

**Incidentally, a type of racial discrimination gone wrong: By weakening criteria like SAT success in favor of race, the standard of the student body is lowered without necessarily helping those it intends to help. (According to some, e.g. [2] and [3], with very different perspectives and with a long time between them.) To boot, this type of discrimination appears to hit another minority group, the East-Asians, quite hard. (They do better on the objective criteria than Whites; hence, they, not Whites, are the greater victims.)

Worse, one of its main sources (and the one source that I checked) is an opinion piece from a magazine (i.e. a source acceptable for a blog, but not for an encyclopedia), which is cited in a misleading manner:* Skimming through the opinion piece, the main theses appear to be (a) that the bar exam protects the “insiders” from competition by “outsiders” by ensuring a high entry barrier**, (b) that this strikes the poor*** unduly, and (c) that the bar exam should be abolished.

*Poor use of sources is another thing I criticized in [1].

**This is Economics 101. The only debatable point is whether the advantages offset the disadvantages for society as a whole.

***Indeed, references to minorities appear to merely consider them a special case of the “poor”, quite unlike the Wikipedia article. To boot, from context and time, I suspect that the “minorities” might have been e.g. the Irish rather than the Blacks or the Hispanics…

Wiktionary does a far better job. To quote and briefly discuss the given meanings:

  1. Discernment, the act of discriminating, discerning, distinguishing, noting or perceiving differences between things, with intent to understand rightly and make correct decisions.

    Rightfully placed at the beginning: This is the core meaning, the reason why discrimination is a good thing and something to strive for, and what we should strive to preserve when we use the word.

  2. The act of recognizing the ‘good’ and ‘bad’ in situations and choosing good.

    Mostly a subset of the above meaning, with reservations for the exact meaning of “good”. (But I note that even a moral “good” could be seen as included above.)

  3. The setting apart of a person or group of people in a negative way, as in being discriminated against.

    Here we have something that could be interpreted in the abused sense; however, it too could be seen as a subset of the first item, with some reservation for the formulation “negative way”. Note that e.g. failing to hire someone without a license to practice medicine for a job as a practicing physician would be a good example of the meaning (and would be well within the first item).

  4. (sometimes discrimination against) Distinct treatment of an individual or group to their disadvantage; treatment or consideration based on class or category rather than individual merit; partiality; prejudice; bigotry.

    sexual or racial discrimination

    Only here do we have the abused meaning—and here we see the central flaw: The example provided (“sexual or racial discrimination”) only carries the given meaning (in as far as exceeding the previous item) when combined with a qualifier; dropping such qualifiers leads to the abuse. “Sexual discrimination”, “racial discrimination”, etc., carry such meanings—just “discrimination” does not. This makes it vital never to drop these qualifiers.

    Similarly, not all ships are space ships or steam ships, the existence of the terms “space ship” and “steam ship” notwithstanding; not all forest are rain forests; sea lions are not lions at all and sea monkeys are not even vertebrates; …

    Note that some of the listed meanings only apply when viewed in the overall context of the entire sentence. Bigotry, e.g., can be a cause of discrimination by an irrelevant criterion; however, “sexual discrimination”, etc., is not itself bigotry. Prejudice* can contain sexual discrimination but is in turn a much wider concept.

    *“Prejudice” is also often misunderstood in a potentially harmful manner: A prejudice is not defined by being incorrect—but by being made in advance and without knowing all the relevant facts. For example, it is prejudice to hear that someone plays in the NBA and assume, without further investigation, that he is tall—more often than not (in this case), it is also true.

  5. The quality of being discriminating, acute discernment, specifically in a learning situation; as to show great discrimination in the choice of means.

    Here we return to a broadly correct use, compatible with the first item, but in a different grammatical role (for want of a better formulation).

    I admit to having some doubts as to whether the implied grammatical role makes sense. Can the quality of being discriminating be referred to as “discrimination”? (As opposed to e.g. “showing discrimination”.) Vice versa for the second half.

  6. That which discriminates; mark of distinction, a characteristic.

    The same, but without reservations for grammatical role.

Written by michaeleriksson

August 9, 2018 at 2:08 am

Negative language changes in Sweden / “nyanlända”

with one comment

Deliberate attempts to change language are often a bad idea (cf. e.g. [1], [2], [3]); especially, when driven by a wish to influence thinking, avoid “offensive” terms, or similar.

A particular annoyance in this regard is the Swedish word “nyanländ”*: This word has long been used in everyday language in a wide variety of contexts, including “nyanländ immigrant” (“newly arrived immigrant”), “nyanlända turister” (“newly arrived tourists”), “nyanlänt flygplan” (“newly arrived airplane”), “de nyanlända”** (“the new arrivals”) …

*Literally: “newly arrived”. A more idiomatic English translation might often use “just arrived” or “new arrival”. In the context of immigration, until recently, “fresh off the boat” would have covered much of the same intents. (But differing in that “fresh off the boat” often has derogatory implications of e.g. making humorous mistakes regarding the local language or customs.)

**With the old meaning, use as a noun (or as an adjective with an implicit noun) is common; with the new meaning, such use appears to dominate over use as a pure adjective.

At some point in the last few years, these uses appear to have been suddenly deprecated in favor of a sole technical meaning referring specifically to some groups of foreigners within Sweden.* This, in and by itself, is so obviously wrong that it beggars belief. However, there is also the significant problem of meaning:

*I am unclear on the motivation, but the overwhelmingly likely candidates fall within the “control thinking”, “euphemistic use”, whatnot family. Note that issues around immigration and refugees are under intense discussion in Sweden, and that there is often a great divide between the (real or ostensible) opinions of the politicians (greatly in favor) and the populace (increasingly negative); see also an excursion at the end. A specific conceivable intention could be to communicate the message that “they are Swedes too; just newly arrived”—as opposed to e.g. “they are refugees, who just happen to live in Sweden right now”. (The only other candidate that has even a remote plausibility to me, is the wish to make a more fine-grained categorization, possibly relating to a law mentioned below, in combination with an extreme lack of judgment as to what words might be suitable.)

It is not obvious what groups are intended. Indeed, until today, when I read up a little, I had assumed that the intent was to find a common term for the sum of all newly* arrived non-tourist/non-visitor foreigners, including regular immigrants, refugees, asylum seekers, … (possibly excluding those who were not given a residence permit). This does not appear to be the case, however, with sources pointing specifically to a group in transit between asylum seeker/refugee and regular resident—to boot with a great vagueness about the details.

*With a great degree of uncertainty as to what “newly” implied. Would e.g. an immigrant remain nyanländ until his death/until he left again, or would his status expire after some time or once a certain other condition had been met?

For instance, the government agency Migrationsverket claims*:

*Here and elsewhere, my translations can be a bit approximate due to both reasons of idiom and the use of many words with a technical meaning. I deliberately keep “nyanländ” and variations thereof untranslated, since these texts deal with definitions or explanations of that term. I will translate the Swedish word “person” with its English cognate through-out; however, I note that this is not necessarily the expected formulation in corresponding English texts, which might opt for e.g. “individual”, while many every-day formulations (in the plural) might opt for “people”.

En nyanländ person är någon som är mottagen i en kommun och har beviljats uppehållstillstånd för bosättning på grund av flyktingskäl eller andra skyddsskäl. Även anhöriga till dessa personer anses vara nyanlända. En person är nyanländ under tiden som han eller hon omfattas av lagen om etableringsinsatser, det vill säga två till tre år.

(A nyanländ person is someone who has been received in [by?] a municipality and has been granted a residence permit due to refugee reasons or other protective reasons. Next of kin to these persons are also considered nyanlända. A person is nyanländ during the time he or she is covered by [a specific Swedish law], i.e. two or three years.)

This presumes to redefine not merely “nyanländ” but “nyanländ person”!* To boot, this new meaning does not merely limit the original meaning; it is in part contradictory to it, because the same individual can be counted as not nyanländ when he is newly arrived, and only become nyanländ when some time, possibly months, has passed… The tie to a specific law is also problematic, e.g. because laws can change or be abolished, and because it should not be the role of the law to define the meanings of words outside of highly legal contexts. I note that regular immigrants are counter-intuitively not included in this meaning.

*This would e.g. make it impossible to speak of “newly arrived persons” when they are tourists stepping off an airplane or citizens stepping off a train. One might make the argument that this should be seen as an ad hoc definition for a specific context (which is both legitimate and commonly occurring); however, this argument falls on the extremely common use in Swedish media and by Swedish politicians. (And presumably in corresponding discussions within the population.)

The paper Metro has an article that explains the difference between various terms, claiming to draw on official sources. It says*:

*In the section with apparently more formal definitions at the end. The article contains several variations in the main text that I have not additionally analyzed.

Nyanländ – En person som har fått uppehållstillstånd i Sverige, blivit kommunplacerad och fått svenskt personnummer. Personen är alltså inte längre asylsökande och har fått rätt att stanna i Sverige. Under två år kan en nyanländ ansöka om etableringsersättning.

Nyanländ – A person who has received a residence permit in Sweden, been assigned to a municipality, and has received a Swedish [personnummer: an official identifier, similar-but-not-identical to a U.S. social-security number]. The person is, correspondingly, no longer an asylum seeker and has the right to remain in Sweden. For two years, a nyanländ can apply for [a specific subsidy].

Note that this attempt at definition makes no mention of the above law, that the restriction in time only covers the subsidy (not the status as nyanländ), and that we have a tie to having been an asylum seeker, which is not necessarily (depending on interpretation) present in the first definition. To boot, it is unclear whether the mention of the personnummer is compatible with the first definition. (The awarding of a personnummer could conceivably be an automatic effect of something else, or it could be a new constraint. I have not researched this further.)

The same article quotes one Pierre Karatzian from Migrationsverket as having said:

Begreppet nyanlända används i olika sammanhang och det är oklart om det finns någon tydlig definition. Ibland används det för att beskriva personer som fått ett uppehållstillstånd.

(The concept nyanlända is used in different contexts and it is unclear whether any clear definition exists. Sometimes, it is used to describe people who have received a residence permit.)

This use, obviously, would typically include regular immigrants (with some reservations for EU citizens, who are subject to special rules).

Swedish Wikipedia does not have a dedicated article, but Swedish Wiktionary has an entry. This entry gives two explanations:

som nyligen har anlänt

(who* has recently arrived)

*This is an imperfect translation of “som”, which has no true equivalent in English. This is based on the contextual assumption of “NÅGON som nyligen har anlänt” and the correct translation “SOMEONE who has recently arrived”.

This amounts to the old and established use.

flykting som fått uppehållstillstånd

(refugee who has received a residence permit)

This is a variation of the new use, but which drops the asylum-seeker aspect and focuses directly on refugees. (Most refugees with a residence permit are likely to have applied for asylum, but it is not a logical necessity and there are likely practical exceptions.)

The entry continues by quoting the same page from Migrationsverket that I use above, leading to further questions of interpretation.

To complicate matters, English Wiktionary also has an entry. This entry gives “newly arrived” as a generic translation (compatible with the old and standard use), and then provides an example that lands half-way between the uses:

Den nyanlände invandraren ska studera svenska för invandrare, SFI.

The newly arrived immigrant must study Swedish for Immigrants, SFI.*

*The disputable translation is part of the quote. While “ska” can mean “must”, this is rare; “is going to” is considerably more likely. Depending on context, “shall”, “should”, “will”, or “wants to” could also be a better translation in terms of intent or idiom. In contrast, “måste” would be correctly translated by “must”.

Note that this example deals with immigrants in general; not (or not specifically*) refugees or asylum seekers.

*Whether these form a subset of immigrants or an independent group can depend on the source.

Excursion on Sweden, SD, and changing attitudes towards immigration:
A quite interesting development is that the party SD was originally seen with great aversion, often even hatred, mostly for its take on immigration and related issues. (Cf. e.g. [4], [5].) Currently, however, it is in contention for the position of Sweden’s largest party, and I seem to recall a recent survey which placed it as a clear number one specifically in the area of immigration policy, with roughly one third of the survey takers rating its policy the highest.

As for the reason for this change, I can only speculate. However, the immense* influx of foreigners to Sweden has likely led to changing opinions, e.g. in that someone who supports free migration in principle now sees a need for pragmatic restrictions for reasons of sustainability. Another reason could be that the presence of SD has made the topic acceptable again and that more people are willing to stand up for an unpopular, non-PC opinion. (Cf. ideas like the Overton window, on which I am planning a text.)

*The influx has been around one percent of the population per year in the last one or two decades; with an upwards trend and a massive additional increase during the recent European migrant crisis. I briefly looked for more detailed numbers, but found this tricky, due to factors like numbers being outdated (and obviously changing over time), the inclusion or exclusion of refugees being unclear, unclear treatment of Swedish emigrants who are now returning, whatnot; and I am not confident in being more specific. To this must be added that these numbers only reflect migration—not demographic changes through higher/lower birth and death rates in various groups.

Written by michaeleriksson

July 4, 2018 at 10:24 pm

A few guidelines on when not to use “feminist”

with one comment

The word (and, by implication, the associated concept) “feminist” and its variations are extremely overused. A few, likely incomplete, guidelines for when not to use it:

  1. Never use the word to refer to someone who does not self-identify as feminist.

    Note particularly that many (including women, including those who see men and women as equal) see the word as an insult. For this reason, particular care should be taken with those who are already dead or will be otherwise unable to defend themselves against what can amount to an accusation.

    Even among those who do not, the use is often too speculative or commits the fellow-traveler fallacy (which I recommend keeping in mind through-out this post).

    A fortiori, never use the word about someone who died before the word was coined (preferably, became mainstream with a stable meaning)*. This is particularly important, because by associating itself with successful or important women from the past, many of whom might have viewed it as absurd, feminism can create an unduly positive perception of itself.

    *The earliest mentions, in a somewhat current sense, appear to have been in the 1890s. A more reasonable cut-off might be the 1949 publication of de Beauvoir’s The Second Sex, which arguably brought a change of character in the women’s rights movements; and at which time there had been considerable changes in women’s opportunities and rights through e.g. WWII and various law changes in various countries, and the word had reached a greater popularity than in the 1890s. Beware that the spread of the word necessarily progressed differently in different countries.

  2. Be cautious about applying the word to someone who does self-identify as feminist, but is unlikely to be fully aware of the implications. Notably, every second young actress appears to self-identify as feminist, without having any actual understanding, instead being “feminist” because it is what is expected of the “enlightened” or because they have fallen into one of the traps of meaning discussed below. To boot, they give reason to suspect me-too-ism.

    More generally, a disturbing number of supporters of feminism fall into the category of “useful idiots”, e.g. through declaring themselves supporters after uncritically accepting faulty claims by feminist propagandists. Obviously, however, a significant portion of these do qualify as feminists. (By analogy, someone who follows a certain religion based on flawed evidence should still be considered a follower, while someone who has misunderstood what the religion teaches often should not.)

  3. Never use the word because someone supports equality between the sexes. Very many non-feminists do too; very many feminists do not*.

    *Contrary to their regular self-portrayal, which has even led to some grossly misleading dictionary definitions. Notably, the red thread of the feminist movement has been women’s rights, which only coincides with a fight for equality in a world where women are sufficiently disadvantaged. The inappropriateness of this self-portrayal is manifestly obvious when we look at e.g. today’s Sweden, where men now form the disadvantaged sex and feminists still clamor for more rights for women—but hardly ever mention rights of men or equal responsibilities for both sexes.

    Notably, I believe in equality and very clearly identify as anti-feminist. Cf. e.g. an older post.

    Equating “wanting equality” with “feminism” is comparable to equating “wanting freedom” with “liberalism” or “wanting [socio-economic] equality” with “communism”. (However, there is an interesting parallel between feminism and the political left in that both seem to focus mostly on “equality of outcome”, which is of course not equality at all, seeing that it is incompatible with “equality of opportunity”, except under extreme and contrary-to-science tabula-rasa assumptions.)

  4. Never use the word because someone believes in strong women, takes women seriously, writes fiction with a focus on women or showing women in power, or similar.

    None of this has any actual bearing on whether someone is a feminist or not. Indeed, much of feminist rhetoric seems based on the assumption that women are weak, in need of protection, unable to make their own minds up*, unable to make sexual decisions for themselves, and similar.

    *Or, make their minds up correctly, i.e. in accordance with the opinion that feminists believe that they should have. (For instance, through not professing themselves to be feminists, or through preferring to be house-wives.)

  5. Never use the word because someone agrees with feminists on a small number of core issues, even if these have symbolic value within the feminist movement.

    For instance, it is perfectly possible to have a very liberal stance on abortion without otherwise being a feminist. (And feminists who oppose abortion, e.g. for religious reasons, exist too, even though they might be considerably rarer.)

Addressing the issue from the opposite direction, it would be good to give guidelines on when the word should be applied. This, however, is tricky, seeing that there is a considerable heterogeneity within the movement. An indisputably safe area, however, is that of gender-feminism, which has dominated feminist self-representation, reporting, politics, …, for decades, and likely has the largest number of adherents once non-feminists (per the above) and useful idiots are discounted. The use can with a high degree of likelihood safely be extended to variations that are otherwise strongly rooted in quasi-Marxism, a tabula-rasa model of the human mind, and/or de Beauvoir’s writings.*

*With the reservation that we, for some aspects, might have to differentiate between those who actually apply a certain criticism or whatnot to the modern society and those who merely do so when looking at past societies.

I personally do not use it to refer to e.g. “equity feminism”, which is so contradictory to gender-feminism as to border on an oxymoron—and I strongly advise others to follow my example, for reasons that include the risk of trivializing or legitimizing gender-feminism through “innocence by association” and, vice versa, demonizing “equity feminists” through guilt by association. However, the case is less clear-cut, in either direction, than the cases discussed above.

Written by michaeleriksson

June 10, 2018 at 1:02 pm

Posted in Uncategorized


Abuse of “they” as a generic singular

with 2 comments

Preamble:
I had gathered ideas and individual paragraphs for this post for a few weeks without actually getting to the point of writing it. In order to finally get it done today, I have pushed quite a lot of what should or could have been parts of the integrated text into detached excursions at the end of the text and made some other compromises in terms of structure and contents.

One of the greatest annoyances in current English is the growing* tendency to abuse “they” as a generic third-person singular (including secondary forms, notably “their”). Below I will discuss some of the reasons why this abuse is a bad idea and give alternatives for those who (misguided, ignorant of the history of English, and/or unable to understand the abstractness of a language) oppose the use of “he” in the same role. An older article on “gender-neutral language” covers some other aspects, usually on a more abstract level (and some of the same ground; however, I have tried not to be too duplicative). Some other articles, including one on language change, etc., might also be of interest in context.

*While this has a fairly long history, I regularly saw people being corrected for committing this error even some five or ten years ago. To boot, cf. below, there are strong reasons to suspect that the main motivation has changed from simple ignorance or sloppiness to a deliberate abuse for PC reasons.

Below, I will largely discuss practical aspects. Before I do so, I am going to make a stand and call this abuse (when done for PC reasons) outright offensive.* It offends me, and it should offend anyone who cares about language and anyone who opposes political manipulation through newspeak. More: This is not just a question of good language or newspeak. The abuse of “they” is also a direct insult towards significant parts of the population, who are implicitly told that they are that easy to manipulate, that they and their own opinions matter so little that they deserve such manipulation, and that they need to be protected from the imaginary evils of “gendered language”. Moreover, this abuse is** often dehumanizing and deindividualizing, in a manner disturbingly similar to what took place in the dystopian novella “Anthem”.

*I am normally very careful when it comes to words like “offensive”—unlike the PC crowd I actually understand the aspects of subjectiveness involved and how misguided such argumentation usually is. However, since “offensiveness” is used by them in such a systematic and, mostly, irrational and unjustifiable manner, I will not hold back in this case.

**At least if we were to apply PC “logic” in reverse, which, again, is something that I would likely not do, had the PC crowd not gone to their extreme excesses.

Now, discounting the evils of PC abuse, per se, the worst thing about abusing “they” is the risk of entirely unnecessary confusion and misunderstandings*: In a very high proportion of the cases I encounter, additional context or even guesswork is needed to connect “they” with the right entity/-ies; often this choice is contrary to what would be grammatically expected; occasionally there is so much ambiguity that it is impossible to be certain what was meant. Consider something like “My friend went with Jack and Jill to see their parents”: Unless they are all siblings (or went to see multiple sets of parents), this really must mean that they went to see the parents of Jack and Jill; however, in a modern PC text, it could just as easily be the friend’s parents. Or take something like “Monopoly is played by two to six players, one of whom is the bank. They [the `bank’] handle most of the money.”: Without already knowing the rules, the second sentence is impossible to understand when “they” is abused (and stating something untrue when it is used correctly).

*There are situations where ambiguities can arise even when using correct grammar, especially with a sloppy author/speaker; however, the proportion is considerably lower, the probability that the ambiguities are resolved through context is higher, and the added confusion caused by the uncertainty whether a given author/speaker abuses “they” is absent. (Note that the argument that “if everyone spoke PC this would not be a problem” is flawed through failing to consider the great number of existing texts as well as the necessarily different adoption rates in different countries and generations.)

A few days ago, I encountered a particularly weird example, in the form of an error message, when I was trying to clean up unnecessary groups and users* on my computer:

*In Unix-like systems, “users” (accounts) can be assigned “groups”. With extremely few exceptions, every user should correspond to at most one physical user. (Some users are purely technical and do not have any physical user at all.) A group, however, can be assigned to arbitrarily many users and, by implication, arbitrarily many physical users. As a special case, it is common for every user to be a member, often the sole member, of a group with the same name as the user name. Below, this is the case for the user “gnats”.

/usr/sbin/delgroup: `gnats’ still has `gnats’ as their primary group!

Here it is impossible to delete the group “gnats”, because the user “gnats” belongs to this group; however, this fact is obscured through the incompetent error message that uses “their”, giving the impression that the group is meant… In many cases, say with the user “gnats” and the group “audio”, this would not have been the end of the world, but when the names coincide, it is a horror, and interpretation requires more knowledge about the internals of the system than most modern users will have. This example is the more idiotic, because the pronoun is entirely unnecessary: “[…] as primary group!” would have done just fine. Even given that a pronoun was wanted, “its” would be the obvious first choice to someone even semi-literate, seeing that the user “gnats” is an obvious it*—regardless of whether the physical** user behind it is a he or a she.

*Similarly, a bank account remains an “it”, regardless of the sex of the account owner.

**As it happens, “gnats” is one of the users that do not have a physical user at all (cf. above footnote), making “it” the more indisputable choice.
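The constraint behind the error can be sketched in a few lines of Python. This is a hypothetical, much-simplified model (not the actual delgroup implementation), with invented example data; it also shows how easily the error message could have named the blocking user instead of reaching for “their”:

```python
# Hypothetical model: a group cannot be deleted while it is still
# some user's primary group. Example data; not taken from a real system.
users = {"gnats": "gnats", "alice": "audio"}   # user name -> primary group
groups = {"gnats", "audio", "video"}           # existing groups

def delgroup(group):
    # Find users whose primary group is the one to be deleted.
    owners = [user for user, primary in users.items() if primary == group]
    if owners:
        # Name the user explicitly; no pronoun needed ("its" would also work).
        raise ValueError(
            f"delgroup: user `{owners[0]}' still has `{group}' as primary group!"
        )
    groups.remove(group)

delgroup("video")        # succeeds: no user has "video" as primary group
try:
    delgroup("gnats")    # fails: the *user* "gnats" blocks the deletion
except ValueError as e:
    print(e)
```

With the user and the group sharing the name “gnats”, only a message of this form makes clear that it is the user, not the group, that stands in the way.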

The use of “their” instead of “its” is just one example of the many perverted abuses that occur. A very similar case is using “they” instead of “it” for an animal*. Mixing “one” and “they” is yet another (e.g. “one should always do their duty”, which would only be correct if “their” refers to some people other than the “one”). A particularly extreme perversion is using “they” when the sex of the person involved is actually known (or a necessity from context), as e.g. in “my friend liked the movie; they want to see it again”.**

*Whether “it” is more logical than “he”/“she” for an animal can be disputed, but it is the established rule. Going with “they” over “it” gives only disadvantages. (Even the pseudo-advantage of “gender neutrality” does not apply, because “it” already had that covered.)

**As an aside, there might be some PC-extremists who actually deliberately use such formulations, because they see every sign of sex (race, nationality, religion, …) as not only irrelevant in any context, but as outright harmful, because “it could strengthen stereotypes”, or similar. Not only would this be a fanaticism that goes beyond anything defensible, it also severely damages communications: Such information is important in very many contexts, because these characteristics do have an effect in these contexts. (And it is certainly not for one party to selectively decide which of these contexts are relevant and which are not.) For instance, if someone cries, the typical implications for a male and a female (or a child and an adult) are very different. Ditto, if a catholic and a protestant marriage is terminated. Etc.

Assuming that someone absolutely does not want to use “he”, there is still no need to abuse “they”. Alternatives include:*

*What alternatives are usable when can depend on the specifics of the individual case. I can, however, not recall one single abuse that could not be resolved better in at least one way. Note that I have not included variations like “he or she” or “(s)he” in the below. While these are better than “they”, and can certainly be used, they are also fairly clumsy and the below works without such clumsiness. (I have no sympathies at all for solutions like using “he” in odd-numbered chapters and “she” in even-numbered ones. They bring little value; do not solve the underlying problem, be it real or imagined; and, frankly, strike me as childish.)

  1. Use a strict plural through-out, e.g. by replacing “everyone who wants to come should bring their own beverages” with “those who want to come should bring their own beverages”.
  2. Using “one” (but, cf. above, doing it properly!), e.g. by replacing “everyone should be true to themself” with “one should be true to oneself”.
  3. Similarly, rarely* using “you”, e.g. by having “you should be true to yourself” as the replacement in the previous item.

    *Cf. another older article why “you” is usually best avoided (for completely different reasons).

  4. Using “who” or another relative pronoun, e.g. by replacing “My friend is nice. They came to help me.” with “my friend, who came to help me, is nice”.*

    *But in this specific example, the sex is known and it would be better yet to use “he” or “she” as appropriate. This applies equally in any other examples where the sex is known.

  5. Avoiding the pronoun altogether, e.g. by replacing “every student should bring their chosen book” with “every student should bring a chosen book”, or “someone asked me to describe the painting to them” with “someone asked me to describe the painting”.
  6. Using the passive, e.g. by replacing “they* brought the horses back to the stable” with “the horses were brought back to the stable”. (If there is fear of information loss, we could append a suitable “by X” at the end of the replacement, just making sure that “X” is not “them”.)

    *Assuming that this is intended as a singular. If “they” is actually used for a plural, it is perfectly fine.

  7. In many cases, it is possible to use either “he” or “she” as a semi-generic singular from context. For instance, when generalizing based or semi-based on a man/woman, “he”/“she” can often be used accordingly without losing much genericness and without upsetting any but the most extremist of the PC crowd. For instance, “If a beginner like you cannot succeed, they should still try.” would be better as (male counter-part) “[…], he […]” resp. (female counter-part) “[…], she […]”.

    (Of course, when all of those we generalize to belong to a single sex, the appropriate one of “he” and “she” should be used, analogously to the Thalidomide example below.)

Excursion on “it” vs. “they”:
Using “it” rather than “they” (as a replacement for “he”) would have made much more sense, seeing that it actually is a singular and that it actually is in the neutral gender*. Many of the arguments against “they” would still apply, but if someone really, really wanted to use an existing word as a replacement, “it” really is the obvious choice. I could have had some understanding and sympathy for “it”, but “they” is not just idiotic—it is obviously idiotic.

*“They” has some (all?) characteristics of a neutral gender in English, but whether it actually is one is partly depending on perspective. In English, it might be better to consider it a mix-gender form; in other languages, there might be different words for a third-person plural depending on the grammatical genders of the group members; whatnot.

The somewhat similar (but off-topic) question of whether to use “it” or “they” for e.g. a team, a company, or a band is less clear-cut. I would weakly recommend “it” as the usually more logical alternative, as well as the alternative less likely to cause confusion; however, in some cases “they” can be better, and I probably use “they” more often in my own practical use.

Excursion on “everyone”, etc.:
Errors that originate in ignorance or sloppiness are far more tolerable than those that originate from PC abuse. The most common (relating to “they”) is probably to take “everyone” to be a grammatical plural (logically, it often is; grammatically, never), resulting in sentences like “everyone were happy with their choices”, which is almost OK and unlikely to cause confusion considerably more often than a strictly correct sentence. In contrast, a PC abuse would result in “everyone was happy with their choice”, which is ripe with possibility for misunderstanding.

Excursion on PC language in general:
It is not uncommon that other attempts to “be PC” or “gender-neutral” in language cause easily avoidable problems. For instance, parallel to writing this post I skimmed the Wikipedia article on Thalidomide, which among other claims contained “Thalidomide should not be used by people who are breast feeding or pregnant, trying to conceive a child, or cannot or will not follow the risk management program to prevent pregnancies.”—leaving me severely confused. Obviously, if we look at “breast feeding or pregnant”, this still necessarily* refers only to women**—but what about the rest of the sentence? If a man tries to conceive a child with his wife, does he too have to stay clear of Thalidomide?*** If the author of the sentence had left political correctness (and/or sloppiness) at home and spoken of “women” instead of “people” where only women were concerned, and then of “people” where both sexes were concerned, there would have been no problem present. This is the more serious, as such pages will inevitably be used for medical consultation from time to time—no matter how much their unsuitability for such purposes is stressed.

*There are rare cases of men lactating, but I have never even heard of this being used for breast feeding. If it has happened, it is too extraordinarily rare to warrant consideration here.

**Implying that speaking of “people” would be at best misguided and unnecessary, even for this first part. However, since no actual confusion or miscommunication is likely to result, this alone would be forgivable.

***Later parts of the page make clear, very contrary to my expectations, that men are included, “as the drug can be transmitted in sperm”. (I still suspect, however, that the risks are smaller for men than women, due to the smaller exposure from the fetus point of view.)

Excursion on Wikipedia:
Wikipedia, which used to be exemplary in its use of language (and strong in other “encyclopedic” characteristics), has degenerated severely over the years, with abuse of “they” being near ubiquitous. Unfortunately, other language problems are quite common; so are other PC problems, including that an entirely disproportionate number of articles have a section on feminism, the feminist take on the topic, the topic’s relation to feminism, whatnot, somewhere—even when there is no particular relevance to or of feminism. (Including e.g. many articles on films with a section on how the film is interpreted using “feminist” film analysis.)

Excursion on duty to correctness:
Human acquisition and development of language is to a large part imitative. When people around us use incorrect language, there is a considerable risk, especially with young people, that the errors will be infective. For this reason, it could possibly be argued that we have a duty to be as correct as possible (within the borders of our own abilities). When it comes to e.g. teachers, TV, newspapers, … I would speak of a definite such duty: They have the opportunity to affect and, possibly, infect so many people that it is absurd to be sloppy, especially seeing that many of them have the resources to use professional checkers, e.g. copy editors. (Of course, sadly, these also have other duties like proper research, “fairness in reporting”, and whatnot, that are neglected disturbingly often.)

Excursion on logic of language:
Much of language is illogical or arbitrary, or seems to be so, because of remnants of long-forgotten and no longer used rules; however, much of it is also quite logical and a great shame today is that so many people are so unable to see patterns, rules, consequences, whatnot, that should be obvious.* Failing to keep numbers consistent is one example. Others include absurdities like “fast speed”, “I could care less”, “in the same … with …”, “try and”. That someone slips up on occasion is nothing to be ashamed of—I do too**. However, there are very many whose language is riddled with such errors, and there appears to be a very strong correlation between such errors and low intelligence, poor education, and simply not giving a damn.

*Not to be confused with the many language errors that arise from e.g. not remembering the spelling of a certain word, having misunderstood what a word means, not knowing the right grammatical rule, … These are usually easier to forgive, being signals of lack of knowledge rather than inability to think. Other classes of errors not included are simple slips of the pen/keyboard and deliberate violations, say the inexcusable practice of abusing full stops to keep the nominal length of a sentence down, even at the cost of both hacking the sentence to pieces that cannot stand alone and making it harder to understand.

**I have a particular weak spot for words that sound similar, e.g. “to”, “too”, and (occasionally) “two”: Even being perfectly aware of which is the correct in a given context, I sometimes pick the wrong one through some weird automatism. The difference between a plural and a possessive “s”-suffix is another frequent obstacle.

Written by michaeleriksson

May 27, 2018 at 7:41 am