Michael Eriksson's Blog

A Swede in Germany


A few notes on my language errors II


Re-reading a text on experiences in Sweden, I found an example that simultaneously illustrates two problem areas: “false friends”* and a weaker knowledge of words for everyday items (or, more generally, a knowledge that varies with the domain). Specifically, I wanted to translate the Swedish “kartong” (“carton”) and jumped straight to “cartoon”… The mistake is understandable, seeing that all three words are derived from the French “carton” or ultimately the Italian “cartone”. The result is still borderline hilarious—and this is a mistake that a native speaker would be unlikely to make. Notably, there is a wide range of words that most native speakers learn as children and that only rarely feature outside e.g. home settings, implying that non-natives are unlikely to pick them up from language courses, science books, fiction,** whatnot.

*I.e. words from different languages that sound/look as if they mean the same thing, but where the actual meaning is different. However, to me, this is normally a greater problem between German and Swedish than in constellations involving English, because those two languages are more similar. This includes many cases of words that used to mean the same but have since drifted apart. For instance, when my mother (probably) once complained that I was still unmarried, I tried the excuse that there were too few women at work—and, with the German “Frauen” (“women”) in mind, I spoke of “fruar” (“wives”). (A closely related issue, if not “false friends” in a strict sense, is the many words in German that sound/look as if they had an immediate equivalent in Swedish (or vice versa) but do not, or where there is an almost immediate equivalent with a slightly unexpected shape. Consider e.g. the Swedish “avlasta”, where a naive translator might try a faux-German “ablasten” instead of the correct “entlasten”.)

**Much unlike e.g. “homicide”, “evidence”, “subpoena”, …

More generally, knowledge of a language is often strongly domain dependent, depending on factors like what we have read and what fields we have worked in. I, e.g., am weaker with kitchen and “home” terminology in German and English than in Swedish, due to my Swedish childhood; but stronger with computer terminology, due to my German work-experiences and my English readings. Quite often, I have found myself in a situation where I am well aware of the word for a certain concept in one language but lack the same word in another, depending on what type of readings has created the awareness.*

*This is sometimes noticeable in that I use lengthier formulations or awkward terminology in one discussion and better terminology (for the same concept) a few years later. In some cases, e.g. “identity politics”, I have been aware of the concept before I learned the phrase in any language.

The “carton”–“cartoon” mix-up is not a case of confusing sound-alike words (a problem mentioned in the first installment). If anything, the “-ton” and “-toon” parts of the respective words are quite far apart in pronunciation. Instead, it was either a matter of having the right word in mind and not having a sufficient awareness of the spelling, or of grabbing the “false friend” instead of the correct word with too little reflection. (To tell for certain after more than a month is hard.)

In contrast, my mistaken use of “shelve”* for “shelf” is at least partially a sound issue (partially a “not good with home terminology” issue), although of a less unconscious kind: I was uncertain whether the singular of “shelves” was “shelve” or “shelf”, decided to go with “shelve” and to let the spell-checker correct me as needed—overlooking that there is a verb “to shelve”… (Implying that the spell-checker saw “shelve” as a correct spelling, being unable to tell from context that a noun was intended. Actually researching the spelling through the Internet would have given me the correct answer in a matter of seconds…) More generally, the question of “f” vs “v[e]” is often a problem, including my often forgetting the switch to “v” in a plural (e.g. “lifes” instead of “lives” as a plural of “life”) and hypercorrecting (e.g. “believes” instead of “beliefs” as plural of “belief”).

*In a number of recent texts relating to my attempts to buy shelves online, e.g. [1].
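The spell-checker limitation described above can be made concrete with a toy sketch (the word list and function below are my own illustrative stand-ins, not any real spell-checker): a purely dictionary-based check only tests whether each word exists, not whether its part of speech fits the context, so the verb “shelve” sails through even where the noun “shelf” was intended.

```python
# Hand-made toy word list; a real checker would use a far larger dictionary.
WORDS = {"shelf", "shelve", "shelves", "life", "lives", "belief", "beliefs"}

def spell_check(text):
    """Return the words in `text` that are not in the word list."""
    return [w for w in text.lower().split() if w not in WORDS]

print(spell_check("shelve"))  # accepted, although a noun was intended
print(spell_check("lifes"))   # flagged, because "lifes" is no word at all
```

Only a context-aware (grammar) checker could catch the first case; the membership test is blind to intent.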


Written by michaeleriksson

April 30, 2019 at 1:52 pm

A few notes on my language errors


When proof- or re-reading my own texts, I am often annoyed by the number of language errors that I make, even discounting those relating to ignorance* and sloppy typing**. Below, I will discuss some issues that I have seen repeatedly recently.

*I am not a native speaker, and my understanding of the rules of English can have weird holes. For instance, it was only fairly recently that I realized that “one’s” (“someone’s”, etc.) takes an apostrophe (as opposed to “ones”, “someones”, etc.) in standard English. I also often rely on my spell-checker to find problems with words that I have only used actively on rare occasions. To boot, my own opinion on certain less regulated language questions develops over time, e.g. in that I earlier used “may” quite often to indicate a “might”, “could”, or similar—but now consider this poor style, because of the loss of precision.

**While I only very rarely pick the wrong character, I can get characters turned around (e.g. “on” instead of “no”) and occasionally pick the entirely wrong syllable (e.g. “-er” instead of “-ing”). And, yes, I would count the latter as sloppy typing in my case, because it is not a conscious choice but more of “crossed wires” at some point in the transfer from brain to computer—I pick the wrong set of keys where a less experienced typist might pick the wrong individual key. On occasion, my fingers type an entirely different word than I had in my mind.

The influence of pronunciation* is particularly frustrating, e.g. in that I might mix up “two”, “too”, and “to”—despite having a firm grasp of when which should be used. It seems that the influence of the similarity in sound often tricks my fingers when typing and my eyes when proof-reading. This is likely an area where being a fast typist and reader is actually a disadvantage, because I spend less time on each word (compared to someone slower) and am less likely to notice such differences. Generally, proof-reading is hard for me**, because of the problems with keeping myself concentrated and suppressing the temptation to read faster.

*Beware that I might be more vulnerable to this as a non-native speaker, because different languages have different rules for pronunciation and different phonetic “minimal pairs”.

**Here I found myself writing “more” instead of “me” in an example of the crossed-wire issue mentioned above—somehow, a spurious “or” was inserted.

Late stage changes and additions to a text are often stumbling blocks: The parts of the original draft that remain until publication have been proof-read at least twice (often more)—but the changes made during proof-reading, the new thoughts added after the first draft, the reformulations made because the original was too clunky,* etc., will have gone through fewer stages of checks. Factor in how boring proof-reading is, and a last-minute change might even end up with a single skimming in lieu of proper proof-reading. Sometimes these errors can distort the text, as with a recent use of “net”**: I originally wrote an example in terms of net income/profit, but decided that it made more sense to start with revenue, re-wrote the example correspondingly—and left a “net” in. This causes the numbers used in the example to seem incompatible with each other. In German, with its more complicated grammar, I often have problems like a change of words leading to a change of gender, which would require different suffixes on other words in the same sentence or the use of differently gendered pronouns (possibly, in other sentences)—but where I fail to make all of the secondary changes.***

*Yes, even I have a limit…

**The error is still present at the time of writing, but I might edit the text at a later date. I have refrained from doing so, so far, because I do not trust WordPress’ editing functionality. The same applies to other examples.

***Consider the differences between “Ich habe ein kleines Tier gesehen. Es war braun.” and “Ich habe einen kleinen Hund gesehen. Er war braun.”, and contrast this with the identical English surroundings: “I saw a small [animal/dog]. It was brown.” (However, something similar can happen with at least “a” vs. “an” in English too, e.g. “a dog” vs. “an animal”.)
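The German agreement problem from the footnote above can be illustrated with a toy lookup table (hand-made data covering only the two example sentences, with no real morphology behind it): swapping the noun changes the grammatical gender, which forces secondary changes to the article, the adjective ending, and the pronoun in the following sentence.

```python
# Hand-made agreement data for the two example nouns only.
AGREEMENT = {
    "Tier": {"article": "ein",   "adj": "kleines", "pronoun": "Es"},  # neuter
    "Hund": {"article": "einen", "adj": "kleinen", "pronoun": "Er"},  # masculine
}

def sentences(noun):
    a = AGREEMENT[noun]
    return (f"Ich habe {a['article']} {a['adj']} {noun} gesehen. "
            f"{a['pronoun']} war braun.")

print(sentences("Tier"))  # Ich habe ein kleines Tier gesehen. Es war braun.
print(sentences("Hund"))  # Ich habe einen kleinen Hund gesehen. Er war braun.
```

Replacing the noun without updating every dependent cell of such a table is exactly the class of error described in the main text.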

The imitative character of language learning (see excursion) has often led me astray—to the point that I might find myself unconsciously making the same mistakes that I criticize in others. For instance, I have condemned linking on the word “here”* as naive (and stand by that text!), but found that I have myself made this mistake on a few occasions. (E.g. in blogroll** updates, where I have repeatedly used formulations like “That link was first described here.”, with a link on “here”.)

*Which I would see as a part of language use in the context of hyper-text.

**Originally, I wrote “role” instead of “roll”.

While I have repeatedly complained about how people screw up the “linguistic logic”* of their sentences, I am not infallible myself. For instance, I recently wrote “not entirely unsurprisingly” in a context where “not entirely surprisingly” was the actual intention. I should have stuck with a plain “unsurprisingly”, which would have been less likely to cause confusion for writer and reader alike.

*E.g. through screwed up negations (as above), use of phrases like “fast speed” (a car is fast; its speed is great), or some examples from an earlier discussion.

A quite surprising problem area is line-breaks: If a line-break takes place after a (usually) one-syllable word, I often type this word again after the line-break. Likely, the end-of-the-line suffers some variation of “out of sight, out of mind”, with the result that I fail to recognize that I had already typed the word once. More rarely, the opposite happens and the word is left out entirely; however, this could be unrelated to the line-break, as it happens in other parts of the text too*.

*Just wrote “two”…

Excursion on imitation:
Human language is naturally learned by imitation, and humans seem to be strongly geared towards such imitation. This to the point that I have occasionally found myself correctly using words that I did not know (at least, on a conscious level). This imitative character can have many negative effects, including that people make incorrect assumptions about what a word means (e.g. “decimate”, “discriminate”, “petrified”); use words in a manner that causes a drift in meaning over time (e.g. “discriminate”; possibly, “decimate”); or pick up weird language errors that would have been obviously incorrect to someone who had stopped to think (e.g. “I could care less” or “literally” to imply the exact opposite of what is actually said). Correspondingly, those whose language reaches a greater number of people should see it as their duty to speak and write as correctly as possible, be they authors*, teachers, journalists, politicians, … Similarly, parents should take care when speaking to their children, lest the children pick up poor habits from the beginning. In particular, they should avoid deliberate “baby words” like “doggy” and “bowwow”.

*A complication is the compromise between correct/standard/whatnot and realistic speech by fictional characters. Unless the author wishes to put heavy emphasis on some quality of a character (e.g. that he is unusually stupid or belongs to a different dialectal/sociolectal/whatnot group than the main characters), I recommend erring on the side of the correct, e.g. through assuming that this particular member of a certain group is one of the more well-read and educated—the variation between e.g. construction workers on the same building site can be quite large.

Written by michaeleriksson

April 24, 2019 at 11:44 pm

A few thoughts on the word “gender”


The abuse of “gender” for “sex” has long annoyed me, but I have taken the view that the use for “self-perceived sexual identity” (or similar) was acceptable or even beneficial—if nothing else, the latter is a separate concept and using a separate term for a separate concept is usually a good idea.* However, I unconsciously based this view on a faulty premise: that the grammatical gender was inherently a division into masculine/feminine/neuter or something along similar lines (e.g. just masculine/feminine or masculine/feminine/neuter/common; while an apparently genderless language can equally be viewed as having exactly one gender**). Using this premise, an application to similar*** divisions in other areas would not be absurd—if a good word was not already present.

*Indeed, one of my more common complaints about the PC crowd is the hijacking of words to mean something different from what would be historically expected and/or expected among other speakers, e.g. (in the same area) that “man” and “woman” would refer to self-perception instead of biology. It would be much better to introduce new words for these new concepts. Even worse is deliberate re-/mis-definition for purposes like manipulation, as with e.g. “racism” and “rape” in some circles.

**At least, assuming that it follows a pattern somewhat similar to the typical Indo-European languages, as e.g. a version of English where “they” (“them”, etc.) was abused as a full replacement for “he”, “she”, “it” (“him”, etc.)—which is where, regrettably, English seems to be heading. (The abuse as a generic third-person singular is already dominant.) A sufficiently different language might behave too differently (but is then unlikely to be relevant in this context).

***I stress that e.g. a grammatical “masculinity” does not automatically imply a physical or biological “masculinity”, which is obvious from languages with a more differentiated system than English—hence, “similar” above. This differentiation is another reason not to use “gender” for “sex”—grammatical gender and biological sex do not always coincide. (In German, words for things can be grammatically masculine, feminine, or neuter, even when a logical neuter might be expected. Words applied to men can be feminine (e.g. “die Person”/“the person”); words applied to women can be masculine (e.g. “der Mensch”/“the human”); words for either can be neuter (e.g. “das Individuum”/“the individual”). Of course, the gender changes based on what word is used—not based on the entity referred to.)

This, however, is not strictly the case: it happens to be true in many languages, including English and German, but other divisions are possible. For instance, Proto-Indo-European might have had an animate/inanimate division. Even my native Swedish deviates through a somewhat arbitrary division into utrum and neutrum:* The members of these genders, for all practical and modern purposes, only differ in what indefinite (“en”/“ett”) and definite (“den”/“det”) article is used and whether an “-en” or an “-et” is to be suffixed in certain situations.** Indeed, they were more often referred to as “en-ord” and “ett-ord” (“ord” = “word(s)”) than “utrum” and “neutrum” in school.

*The discussion of actual Swedish grammar in school was superficial, incomplete, or even incorrect—a problem that native speakers of other languages might also have encountered. For this reason, I had simply never really reflected on the implications of the Swedish deviation until today. As an added complication, there are several different perspectives on Swedish genders (above, I discuss the most common) and the situation was historically different.

**E.g. “en sak”/“a thing” vs. “ett träd”/“a tree” and “den saken”/“that thing” vs “det trädet”/“that tree”.
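The en/ett division from the examples above can be sketched as a toy lookup (hand-made data for the two example words only): the two genders differ mainly in the indefinite article (“en”/“ett”), the definite article (“den”/“det”), and the definite suffix (“-en”/“-et”).

```python
# Hand-made classification of the two example words.
EN_ORD = {"sak"}    # utrum / "en-ord"
ETT_ORD = {"träd"}  # neutrum / "ett-ord"

def indefinite(noun):
    # "en sak" / "a thing", "ett träd" / "a tree"
    return ("en " if noun in EN_ORD else "ett ") + noun

def definite(noun):
    if noun in EN_ORD:
        return f"den {noun}en"   # "den saken" / "that thing"
    return f"det {noun}et"       # "det trädet" / "that tree"

print(indefinite("sak"), indefinite("träd"))  # en sak, ett träd
print(definite("sak"), definite("träd"))      # den saken, det trädet
```

Unlike the German case, nothing about masculinity or femininity is encoded here—only which of two article/suffix patterns a word happens to follow.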

Looking outside of grammar, there have been many uses of the word “gender” that also follow the line of a more general classification, e.g. that being English/German/whatnot or belonging to a certain family was discussed in terms of “gender”. Older use for sexual division (e.g. “the female gender”) is just a special case of this, and not* a precedent for a specialized use relating to sex or sexual identity. This makes it all the more illogical to use “gender” when it is actually the sex (or even sexual identity) that is intended: a “Sex:” on a driver’s license calls for “M[ale]” or “F[emale]” with some clarity**, while “Gender:” might equally call for “E[nglish]”.

*Similarly, the fact that we could speak of someone being of the “female persuasion” does not make “persuasion” a good replacement for “sex”, because we can equally combine “persuasion” with other words implying group membership. Note that this applies to a wide range of other words too, e.g. “class”, “set”, “category”. (If it had just been “persuasion”, it might have been rejected as an abuse, or something to restrict to humorous formulations, for other reasons. The choice of “persuasion” as an example is based on the higher frequency of “female persuasion” over, say, “female category”.)

**Or at least it used to… However, even for those who cannot or do not want to be classified as male or female, the type of the classification is clear. On the other hand, confusion with sexual acts is highly unlikely outside of the famous joke about the girl who found her mother’s driver’s license (“Mommy! I know why Dad divorced you! You got an ‘F’ in sex!”).

I re-iterate my recommendation never, ever to use “gender” when “sex” is the traditional word. When it comes to sexual identity, the question is trickier because, again, a separate* word makes sense, and I am unable to offer an alternative that is both sufficiently understandable and has a sufficient current use to not cause as much confusion as “gender”*. However, this might be an area where “persuasion” (see earlier footnote) has some possibilities, actually gaining through its more regular meaning in the area of opinions and convictions, e.g. in that the-athlete-previously-known-as-Bruce would be considered of the male sex and the female persuasion.** Possibly, some shortening of “sexual persuasion”, e.g. “sexper” or “seper”, might work as a replacement for “gender” in such an attempt.***

*Another strong argument against the abuse of “gender” for “sex” is that many will assume a reference to sexual identity where biological sex was intended and vice versa.

**Or at least was so “pre-op”. Possibly, additional terminology is needed for the “post-op” case.

***Using an unabbreviated “sexual persuasion” would be too lengthy in many contexts, e.g. on driver’s licenses. It would also risk a dropping of “sexual” in sloppy use, with negative effects on other meanings of “persuasion”—just like “discrimination” and “intercourse” have drifted towards being used solely for a special case implied by a longer phrase. To start with just “persuasion” would be even worse.

Addendum to the linked-to text:
Possibly ten years ago, I wrote “The possibility that existing literature eventually would be actively re-written to adhere to ‘gender-neutrality’ is not at all far-fetched:”. Indeed not: Consider e.g. my (much later) text on distortion of Blyton, where I lament that the actual events and characters of her books, not just specific words, have been altered for similar reasons.

Written by michaeleriksson

April 1, 2019 at 7:47 am

Brief thoughts on the decline of Latin and Greek as scientific languages


When I first, as a child, learned of the use of Latin and Greek names for various plants, animals, and whatnot, it was explained to me that this was done to (a) ensure that there was a name that scientists speaking different languages could use and still be understood by each other, and (b) still keep the names in a single language.

I am far from certain that this explanation is correct: More likely, the likes of Linnaeus simply started a tradition based on Latin as the then “science language” for his extensive classifications,* which was kept long after Latin lost ground to modern languages.

*Just like I prefer to write in English over Swedish—why not use the language more likely to be understood?

Still, the purported idea is quite sound: Using a single language allows for greater consistency and enables those so interested to actually learn that single language in order to make identification of the item behind the name that much easier;* and Latin has the advantage of lacking** potential for conflict (as might have been the case if English and French or Mandarin and Japanese were pitted against each other).

*Indeed, even a limited knowledge can be a great help, e.g. by knowing a few commonly occurring suffixes and prefixes.

**Or should do so in any sane era: Some politically correct fanatics apparently consider anything relating to “Western Culture” something to be condemned in a blanket manner. Nothing is certain, except for death, taxes, and human stupidity.

Unfortunately, the scientists of old did not stick to Latin, often turning to Greek. (This includes the names of many (most?) dinosaurs.) However, this situation was still reasonably tolerable.

Then things started to get out of hand: Over the last few decades, names have increasingly been coined in any local language. For instance, this text was prompted by the recent discovery of the dinosaur genus Ledumahadi—apparently named “a giant thunderclap at dawn”* in Sesotho.

*The silliness and apparent lack of “scientificity” of the meaning, however, has little to do with the language. Dinosaurs have been given similarly silly names from the early days of scientific attention (and many less silly names for extinct animals are obscure through e.g. referring to the shape of a tooth). In contrast, many of Linnaeus’ names could draw directly on existing Latin names for at least the genus (as e.g. with “homo sapiens”—“homo” being the Latin word for human).

Going by Wikipedia, Sesotho has some five or six million native speakers—considerably fewer than Swedish has today, and an even smaller proportion of the world population than Swedish had in the days of (my fellow Swede) Linnaeus*. If Linnaeus picked Latin over Swedish back then, how can we justify picking Sesotho over both Latin and English today? The idea is contrary to reason.

If someone were to argue that Latin and Greek, specifically, had grown impractical due to the reduced knowledge among today’s scientists, I might have some sympathies. However, if we concluded that they should go, the reasonable thing to do would be to opt for English as the sole language, thereby ensuring the largest global understandability. If not English, then some other truly major language, e.g. Mandarin*, Hindi*, or Spanish, should have been considered. Sesotho is useless as a single language, and not using a single language will end with names that appear entirely random. It will usually even be impossible to know what language a name is in, without additional research, making it that much harder to find out the meaning.

*Here additional thought might be needed on how the names should be written. (Original writing system? Transliterated to the Latin alphabet? Otherwise?)

For those interested in “local” names, there is always the possibility of introducing an everyday name for the local language: Dinosaurs have normally been known by their scientific names even in the general population, but there is no actual law that this must be the case. Call the Ledumahadi “Ledumahadi” in Sesotho and use a Latin or Greek translation* as the scientific name and the default in other languages.

*My limited knowledge does not allow me to make a suggestion.

Written by michaeleriksson

September 29, 2018 at 5:46 pm

Weak justifications for poor language


When it comes to grammar, common arguments from the “anything goes” school include “there are countless examples of X being correct; ergo, X should always be allowed”, “X is not an error, because Shakespeare used it”, and other analogy claims.

Such arguments are usually faulty through lack of discrimination:* It is quite possible for a certain phrasing, grammatical construct, whatnot to be correct in one situation and incorrect in another—and the analogy must only be used as justification when the circumstances are sufficiently similar. An extreme example is “over-exaggerate”: There are situations in which “over-exaggerate” is a reasonable formulation, but it remains an error of the ignorant in almost all cases. Consider e.g. a politician deliberately exaggerating a problem in order to be more convincing—but doing so to such a degree that he loses believability. He has now over-exaggerated.**

*In the case of e.g. Shakespeare, they also forget that a once valid use might now be outdated; that he, as a poet, might have taken liberties in order to improve rhyme or meter; that his language might have contained dialectal features in a pre-standardization English; and similar.

**Whether such a use of “over-exaggerate[d]” has ever taken place is unknown to me; however, until five minutes before starting this text, I had not even contemplated the possibility that it could ever be anything but wrong—and the rarity of the correctness shows the danger of superficial analogy arguments that much better. (At “five minutes before”, I read the phrase “exaggerating too much” and saw the applicability to “over-exaggerate”.)

A more common example is the use of “and”, “or”, and similar conjunctions at the beginning of a sentence. There are cases where such use could be seen as correct. For instance, “Mary had a little lamb. And everywhere that Mary went, the lamb was sure to go.” would not bring me to the barricades.* I even occasionally use such incorrect** formulations myself, in a manner that I consider acceptable in context. Correspondingly, I cannot condemn a leading “and” in a blanket manner.

*However, I would have preferred “Mary had a little lamb; and everywhere that Mary went, the lamb was sure to go.”, because the full stop implies a strong separation that the “And” then reduces, as if someone was simultaneously pressing down on the gas pedal and braking. (Alternatively, I might have tried to cut the conjunction entirely.) Generally, I always remain a little skeptical: Even when the construct can be argued as grammatically acceptable, there are often reasons of style, logic, coherence, whatnot that speak against it.

**For instance, I might use a leading “And” within brackets in situations where I (a) want to strengthen the connection to the preceding text to overcome the bracket, (b) do not consider the bracketed content important enough for more words or even fear that more words might reduce legibility in context. (Of course, others might argue that if the text was that unimportant, it should have been cut entirely…) Similarly, my footnotes are almost always intended to be read in the immediate context of the main text, and will not always be complete sentences or thoughts without that context—some footnotes and brackets could be seen as a branch on a trunk and only make sense when the branch is entered from the trunk. (Why not forego the bracket + “And”, as another case of simultaneously hitting the gas pedal and braking? Well, the bracket is often beneficial to break out less important or less on-topic thoughts, as with the current. From the point of view of the main text, the bracket serves to separate such parts. However, sometimes the connection with the unbracketed text then becomes too weak from the point of view of the bracketed, and the “And” remedies this. This argument does not hold with Mary and her little lamb.)

However, most practical uses remain both incorrect and unacceptable, and those critical of these constructs do not typically suggest a blanket ban—only a ban of incorrect cases. For instance, where someone with an even semi-decent understanding of English would write “Mary had a little lamb and a goat.”, a journalist or a pre-schooler might write “Mary had a little lamb. And a goat.”, which is incorrect by any reasonable standard.* However, the problem does not reside with the “And”, but with the way a single sentence or thought has been artificially, confusingly, and unnecessarily divided into two parts, one of which cannot stand on its own. The error is one of punctuation—not of what word is allowed where. “Mary went home. And took the lamb with her.” makes the same mistake, if a bit more subtly. A faulty separation of a subordinate clause is a common variation, and often includes a far wider range of words. Consider e.g. “John went home. Because Mary was sick.”: Both parts contain a complete sentence and the situation might be salvaged by simply removing the “Because” (at the cost of no longer having the causal connection); however, a “because” clause can come both after and before its main clause, which can cause a lot of ambiguity. For instance, how do we know that the intention was not “John went home. Because Mary was sick, Tom also went home.”, with a part of the text missing?** What if the text, as actually given, had read “John went home. Because Mary was sick. Tom also went home.”? Was it John, Tom, or possibly both, who went home because of Mary’s health?

*Notably, the complete-sentence standard; however, see an excursion for an alternate suggestion and more detail.

**This gives another reason to stick to the rules: If a text contains language errors, it is often not clear why; and by deliberately deviating from correct grammar, the ability to detect accidental errors and to deduce the true intended meaning in face of errors is reduced. Equally, a deliberate deviation can make the reader assume an accidental error where none is present, leading to unnecessary speculation. Other examples that can soon become tricky include leaving out “unnecessary” uses of “that”, “unnecessary” commas, and similar. If in doubt, doing so can lead to their exclusion out of habit in a situation where they were definitely needed.

Someone criticizing such sentences usually does so, directly or indirectly, because of the division—of which “And” is just a result. Even if we were to say that sentences are allowed to start with “and”, “or”, whatnot, these sentences would still be wrong, because they still make an absurd and ungrammatical division. As an analogy, if someone has a viral infection accompanied by a fever, the infection does not go away because the patient’s body temperature is declared normal. More generally, we must not focus on superficial criteria, like a temperature or an optical impression of a sentence—we actually have to understand what goes on beneath the surface and we have to ask the right questions. Above, the right question is “Is the punctuation correct and reasonable?”—not whether a sentence starts with an “and”.

Excursion on my historical take on “and” et al. and on the reverse mistake:
In my younger days, I belonged to the “never acceptable” school, largely through committing the opposite error of “sometimes wrong; ergo, always wrong”—something equally to be avoided. My opinions have become more nuanced over the years. However, I still feel that these constructs should be left to those with a developed understanding, because (a) by simply resolving never to start a sentence with “and” et al., a great number of other mistakes will be far less likely to occur (cf. above), (b) even most grammatically acceptable uses are better solved in other ways (cf. footnote above). I would also argue that a grammar which does categorically forbid these constructs would be perfectly valid and acceptable—it just happens that established English grammar does not. (In contrast, a grammar that allows e.g. “Mary had a little lamb. And a goat.”, while conceivable, would make a mockery of the concepts of full stop and sentence. The purpose of these is to give the reader information about the text not necessarily clear from the words themselves; and it would be a lesser evil to abolish* them entirely than to spread misinformation through them.)

*while interpunctuation is a wonderful thing writing systems tend to start without it uptothepointthatthereisnotevenwordseparation we do not need interpunctuation but do we really want to forego it fr tht mttr nt ll wrtng sstms s vwls still misleading information, is even worse

Excursion on complete sentences:
A typical criterion for the use of full stops is that all sentences are complete, typically containing, at a minimum, a subject and a verb. However, I would argue that it is more important to have a thought* of sufficient completeness** and sufficient context to understand that thought. For instance, this is the case when someone takes a fall and says “ouch”; a soldier shouts “incoming” or a surgeon says “scalpel”; a (compatible) question is answered with “yes”, “no”, “probably”, “the red one”, …; one opponent exclaims “son of a bitch” to the other; any number of imperatives are used (“buy me an ice cream”, “assume that X”); etc. Indeed, a subject–verb criterion might not even make sense in all languages. Many Latin sentences, e.g., will only contain an implicit subject, implying that at least an explicit subject cannot be a universally reasonable criterion. (The English imperatives could also be seen as a case of an implicit subject.)

*I see myself supported by the more original and non-linguistic meanings of “sentence”, which are strongly overlapping with “thought”. Also cf. “sense” and “sentiment”.

**I deliberately avoided “complete thought”, which could imply that the entirety of a thought is expressed. This, in turn, is only rarely the case with a single sentence. (Cf. [1].)

However, these examples are only valid given the right context: Go up to a random person on the street and say “yes”, and chances are that he will be very confused.

“And a goat.” will usually fail this criterion, because it is so heavily tied* to something else that it cannot stand alone. Usually, this something is the preceding own statement (“Mary had a little lamb.”), and the best solution would be to integrate the two (“Mary had a little lamb and a goat.”) or to complete the missing portions (“Mary had a little lamb. And she had a goat.”). However, there are some cases that can be argued, mostly relating to immediate interactions (spoken word, texting, and similar). Consider e.g. “And a goat.” as an afterthought** to a previous complete thought or as an interjection by a second speaker—and compare it with “Oh, wait, it just occurred to me that I would also like to have a goat.” resp. “I agree with the previous speaker, but would like to add that we should also buy a goat.”, and similar overkill. In contrast, “Mary had a little lamb. And everywhere that Mary went, the lamb was sure to go.” has two separate and independent thoughts, both of which are complete subject–verb sentences, both of which could be taken as stand-alone claims with minimal context. (Except as far as the “And” sends a confusing signal and would be better removed in a stand-alone context; however, the result remains a perfectly valid sentence even in the traditional sense.)

*Interestingly, just “A goat.” is more likely to be a valid thought, because the “And” points to something else that must already have been communicated.

**With sufficient delay that the afterthought cannot be integrated into the whole: If someone is currently writing an essay and sees the sudden need to add a goat to the discussion, there is no justification for “And a goat.”—there is more than enough time to amend the text before publication.

However, in most cases, I would recommend sticking to the traditional “complete sentence” criterion, because it makes a useful proxy and can serve to avoid sloppy mistakes when trying to be clever.

Excursion on full-stops for effect:
Full-stops are often deliberately (mis-)used for e.g. dramatic effect or to imitate the spoken word. For instance, “Mary had a little lamb. And a goat.” might arise in an attempt to put extra emphasis on the latter, to simulate a “dramatic pause”, or similar. I recognize that there is some benefit to this effect—but not to how it is achieved. I strongly recommend using the “em-dash” (“—”) for such purposes—and do so myself all the time.* To boot, I would strongly advise against striving for a literal pause, seeing that the written and spoken word are not identical in their character. Notably, most proficient readers do not “sound out” the words in such a manner that an intended pause would actually occur.

*To the point that even I cannot deny overuse… Then again, I do not suggest that others change the frequency of their use of the effect, just that they replace one means of achieving it with another. Some might raise objections against this use of the em-dash, e.g. based on historical use for parenthesis; however, I do not use the old semantics, there are other means to achieve a parenthesis effect, and the em-dash is otherwise fairly rare in modern English.

A particularly idiotic use is the insertion of a full-stop after every word, to indicate that each word is heavily emphasized and separated in time, e.g. “Do. Not. Do. This.”: The only situation where this might even be negotiable is when spoken word is to be (pseudo-)transcribed, e.g. as part of a dialogue sequence in a book. For a regular text, including e.g. a post on a blog or in a forum, textual means of emphasis should be used (italicization, underlining, bold type, …)—the written word is not a mere transcription of the spoken.

Excursion on full-stops in long sentences:
I sometimes have the impression that an artificial full-stop has been inserted to prevent a sentence from being too long, by some standard. (Possibly, some journalists write a correct sentence, see it marked as “too long” by a style checker, and just convert a comma to a full-stop to land below the limit. Then again, some journalists appear to use a full-stop as the sole means of interpunctuation, even when length is not a concern…) The result is a completely unnecessary hindrance of the reader: Because valuable hints are now absent or, worse, misleading, it becomes harder to read the sentence. (Note that there is no offsetting help, because the actual thought expressed does not magically become shorter when a few full-stops are inserted.) For instance, when reading the FAZ (roughly, the German equivalent of the New York Times), I have often encountered a complete sentence of a dozen or more words, followed by “Because”/“Weil” at the beginning of a subordinate clause of another dozen words—and then a full-stop… The result is that I, under the assumption that the grammar is correct, “close” the first sentence, absorb the second with the expectation of applying the causality to a later main clause, and am then thrown entirely off track. I now have to go back to the first sentence, (at least partially) re-read it, make the causal connection, re-think the situation, and then scan forwards to the end of the subordinate clause again, to continue reading. It would have been much, much better to keep the subordinate clause joined by the grammatically correct comma—the more so, the longer the sentences.

Meta-information:
My use of full-stops and capital letters in the above examples is deliberately inconsistent. Mostly, I have tried to avoid them in order to not complicate matters around the resulting double* interpunctuation. However, many examples have required them to be understandable. When it comes to standalone “And” vs “and” (quotation marks included), I have used “And” when it appeared thus in the example, and “and” when speaking of the word more generically.

*Examples like ‘abc “efg.”, hij’ are awkward and can be hard to read. I also categorically reject some outdated rules around interpunctuation and quotes that originated to solve pragmatic problems with equally outdated printing technology.

I found the asymmetry of “Mary had a little lamb and a goat.” a little annoying, and considered adding a “g-word” before “goat”; however, a reasonable “g-word” was hard to find* and some of the later stand-alone examples became awkward.

*The most orthographically and semantically obvious example is “giant”, but it is typically pronounced differently. Other candidates made too little sense.

Written by michaeleriksson

September 21, 2018 at 12:11 am

X began Y-ing


Disclaimer: I set out to write a text just two or three paragraphs long. I was soon met with a series of grammatical complications and aspects that I had hitherto not considered—and I raise the warning that there could be others that I still have not discovered. However, my main objection is one of style—not grammar. (No matter what impression the text could give: It only takes so long to say “it is ugly”.)

I currently spend more time than usual reading fiction. This leads me to again, and again, and again encounter one of the most ugly formulation patterns in the English language: X began Y-ing.

He began running. She started turning. It commenced raining. Etc.

Not only are they very ugly, they are also potentially misleading, because a Y-ing construct* usually has the implication of something (already) ongoing, as with “John, running, began to tire” or “John began to tire [while] running”**. This is particularly bad with “started”, because “she started turning” could be read as “she experienced a start while turning”. The much sounder construct is “to” and an infinitive—“John began to run” over “John began running”. Indeed, I often find myself suppressing a snarky question of “What did X begin to do, while Y-ing?”, even knowing what was meant. In some cases and contexts, other formulations might be suitable, e.g. “John began with running [to lose weight]” or “John began his running [for the day]”. An entirely different road is also possible, e.g. “John broke into a run”, or “John took up running” (as a smoother alternative for the weight-loser above).

*The main cases usually are participles (or, in a noun context, gerunds). I am uncertain how “Y-ing” in “X began Y-ing” should be classified, especially since it logically fills the role of an infinitive. Conceivably, it is a gerund (cf. an excursion on “stopped” below), which would give it some grammatical justification, but would not reduce its ugliness or potential ambiguity. The matter is complicated by e.g. “John began running slowly”, which would point to a participle, not a gerund. (It might be explained as intending “John slowly began running”, but that would change the meaning.) To boot, the same string of characters can sometimes be interpreted in different roles and meanings in a given sentence—and the gerund–participle division seems very vulnerable to this (but I will ignore such complications in the rest of the text).

**This example is equally ugly and not something that I would recommend (at least not without the “while”). The purpose of the examples is solely to illustrate the potential confusion.

Moreover, even a construct using “began” is often just a waste of space—a simple “John ran” will often do the trick. That he began to do so will often be clear from context, redundant, or simply not interesting in the overall situation. Consider e.g. “John walked along the path.* A bear burst out of the woods and John ran.”: The use of “began to run” (or “began running”) adds nothing but length to the text.

*This sentence makes the issue crystal clear. However, it is not always necessary, because (a) John is more likely to have walked than to have run, and (b) what he did before the encounter with the bear is usually of secondary importance to a work of fiction (but the increased precision might be beneficial in non-fiction). In a pinch, that John was already running could be conveyed by “John ran faster”. In other cases, a “began to Y”/“began Y-ing” brings no value at all, as with “John jumped into the water and began to swim”—he was hardly swimming before, so “[…] and swam” is better. The variation “John jumped into the water and began to drown” / “[…] drowned” only sees a significant difference when the event/action/whatnot was not completed, here e.g. because John was rescued. Often the action is so short that its commencement will almost always imply its conclusion—using “she started to turn” over “she turned” is hardly ever justifiable.

My advice: The first attempt should use a single, ordinary verb, e.g. “John ran”. If this does not work in the overall context, go with “began to”, e.g. “John began to run”. Never use “began Y-ing”.

Excursion on “stopped” and similar words:
What about the mirror image “John stopped running”? I consider this formulation more acceptable, but also suboptimal, and would not see it as a justification for e.g. “X started* Y-ing”. This case differs in several regards: Firstly, the absence of strong alternatives. (There is no mirror image to “John ran”**, and “John stopped to run” is both uglier and more ambiguous than “John stopped running”.) Secondly, the lesser ambiguity. Thirdly, being less ugly in my eyes. Fourthly, having a greater grammatical justification, seeing that an interpretation as something ongoing is reasonably compatible (unlike with “start”): “John stopped running” could, if somewhat generously, be seen as “John, currently running, stopped doing so”. (Contrast this with a hypothetical and paradoxical “John, currently running, started doing so”…) Alternatively, an interpretation as gerund is less awkward than above, e.g. as “John stopped [the activity of] running”.***

*For better symmetry with “stop”, I will use “start” in this excursion. The main text mostly uses “began”, because I have seen “began” much more often in the last few days (and likely generally).

**“John stopped” would be a possible solution when only one activity is ongoing, and especially for activities that imply a movement in space (e.g. “running”). However, this will not work generally: For instance, “John sang while walking down the road. Feeling a sneeze coming on, he stopped.” is ambiguous—did he stop singing, walking, or both? (Note that this ambiguity is more likely to affect the story than whether John ran, walked, or rested before meeting the bear above.)

***Then again, this might be better saved for more ongoing activities, states, whatnot. I would find this formulation less natural with someone who is at this very moment running, and more natural with someone who runs from time to time for exercise. Similarly, “John stopped smoking” would normally imply that he gave up smoking, rather than that he extinguished a cigarette. The same applies to the use of a gerund with “start” (“John started running to lose weight”—not “John started running to escape the bear”). In both cases, a reformulation using “gave up” resp. “took up”, or similar, is beneficial both to reduce ambiguity and to reduce ugliness. (Note that “John took up running” definitely implies a gerund. Also note that “John took up sports” works better than “John began sports”.)

A way out is to avoid “stopped” in favour of e.g. “ceased”: “X ceased to Y” is less problematic than “X stopped to Y”. For the moment, I suggest either using this way out or, when the context allows it, just “X stopped”—never “X stopped Y-ing”.

Constructs like “John continued running” are somewhere between the “start” and “stop” cases: On the one hand, the “ongoing” semi-justification holds similarly to “stop”; on the other, there are alternatives similarly to “start” (“John continued to run” and “John ran”, the latter actually being stronger than for “start”). These alternatives are my recommendation.

A “John continued running” might have some justification with a different intention, as with “John [who was originally walking] continued [now] running [because he saw a bear]”, but here a formulation like “John continued at a run” is usually better.

Excursion on “to … to …”:
A minor potential ugliness when using “to” is variations of “John wanted to begin to run”, where a “to” + infinitive appears repeatedly. The temptation to use “John wanted to begin running” is understandable, but I would recommend a greater restructuring. In the given example, the best solution is usually to just drop “to begin” entirely—“John wanted to run”. Alternatively, something like “John wanted to take up running” works again.

Excursion on other verbs:
My draft contained the following as a backup argument:

Of course, other non-auxiliary* double-verb constructs usually** follow the “to” pattern: “John wanted to run”, not “John wanted running”—conjugated verb, “to”, infinitive verb.

*An auxiliary verb could indeed use Y-ing as a participle, e.g. “John is running”—or use some other variation, e.g. “John must run” (an infinitive without “to”). Generally, some caution must be raised due to the different roles of verbs, which could imply different grammatical rules.

**A potential group of exceptions is those like “stop”, cf. excursion. While no other group of exceptional verbs occurs to me, they might exist.

During proof-reading, exceptions like “loved running”, “disliked running”, “ran celebrating”, and my own uses of “took up running” belatedly occurred to me. These make the issue of precedence trickier, and I would rather not do the leg-work on the issue. However, limited to these cases:

“Took up running” is a strict gerund phrase, to the point that it can be disputed whether it is even a double-verb construct. (“Took up sports”, again, works much better than “began sports”, pointing strongly to a verb–noun construct. A gerund is, obviously, a quasi-noun. “Took up to run” is not even a possibility.) Due to its character, there is also much less room for ambiguity.

“Ran celebrating” serves more to exemplify my objections against “began running” than to conquer them: Here two activities take place simultaneously (running and celebrating) that are not that closely connected. Someone is in a state of celebrating (e.g. having just won a track race) and is running while being in this state (e.g. during a lap of honor). Prior to winning, he was running without celebrating; after the lap of honor, he will not be running but still be celebrating. Indeed, “he began, celebrating, to run” shows how awkward a formulation like “he began celebrating” is. Even when the connection is strong, the modification by the one verb (a participle) is not necessarily on the other verb, but more (or wholly) on the actor in all cases that I can think of at the moment, e.g. “he slept dreaming” (broadly equivalent to “he slept and was dreaming”; and as opposed to “he slept dreamingly”, broadly equivalent to “he slept and did this dreamingly”).

As for “loved running” (ditto “disliked running”), it is usually solidly in gerund territory and refers to more general activities than e.g. “John began running” typically does, e.g. “John loved running as a means of exercise”. In contrast, even if we allowed “John loved running from the bear” (referring to that one situation), it would make John a bit of a freak—and it could easily be replaced by “John loved to run from the bear”. Then again, I am skeptical of allowing “John loved running from the bear” in the first place: While it is not as ugly and ambiguous* as “John started running”, the gerund** issue arises and the construct brings no additional value over “John loved to run from the bear”.

*But it has some ambiguity: John might e.g. have been filled with love for his wife while running.

**Replacing “running” with “sports” gives us the nonsensical “John loved sports from the bear”, speaking against a gerund, while variations like “John loved running speedily from the bear” point to a participle. Can the use be justified if it is not a gerund? Would it not be better to consistently use a “to” + infinitive?

Written by michaeleriksson

September 7, 2018 at 4:36 am

Posted in Uncategorized


A call for more (!) discrimination

with 4 comments

The word “discrimination” and its variations (e.g. “to discriminate”) are hopelessly abused and misunderstood in today’s world. Indeed, while properly referring to something (potentially) highly beneficial and much needed, “discrimination” has come to be mere shorthand for longer phrases like “sexual discrimination” and “racial discrimination”.* Worse, even these uses are often highly misleading.** Worse yet, the word has increasingly even lost the connection with these special cases, degenerating into a mere ad hominem credibility killer or a blanket term for any unpopular behavior related (or perceived as related) to e.g. race.***

*Note that it is “sexual” and “racial”—not “sexist” and “racist”. The latter two involve ascribing an intention and mentality to someone else, beyond (in almost all cases) what can possibly be known—and such ascriptions are sometimes manifestly false. Further, their focus on the intent rather than the criteria would often make them unsuitable even in the rare cases where the use could otherwise be justified.

**E.g. because a discrimination on a contextually rational and reasonable criterion (e.g. GPA for college admissions) indirectly results in differences in group outcome, which are then incorrectly ascribed to e.g. “racial discrimination”. The latter, however, requires that race was the direct criterion for discrimination.

***Including e.g. having non-PC opinions about some group or expressing that opinion, neither of which can in any meaningful sense be considered discrimination—even in cases where the opinion or expression is worthy of disapproval. This includes even the (already fundamentally flawed) concept of micro-aggressions.

What then is discrimination? Roughly speaking: The ability to recognize the differences between whatever individuals/objects/phenomena/… are being considered, to recognize the expected effects of decisions involving them, and to act accordingly. Indeed, if I were to restrict the meaning further, it is the “act” part that I would remove…* (Also see a below excursion on the Wiktionary definitions.)

*E.g. in that I would not necessarily consider someone discriminating who flipped a coin and then hired exclusively men or exclusively women based on the outcome—apart from the greater group impact, this is not much different from the entirely undiscriminating hiring by a coin flip per candidate. I might possibly even exclude e.g. the feminist stereotype of a White Old Man who deliberately hires only men because of the perceived inferiority of women: This is, at best, poor discrimination on one level and a proof of a lack of discrimination on another. C.f. below. (While at the same time being a feminist’s prime example of “discrimination” in the distorted sense.)

For instance, deciding to hire or not to hire someone as a physician based on education and whether a license to practice medicine is present, is discrimination. So is requiring a lawyer to have passed a bar exam in order to perform certain tasks. So is requiring a fire fighter to pass certain physical tests. So is using easier tests for women* than for men. So is using health-related criteria to choose between food stuffs. So is buying one horse over another based on quality of teeth or one car over another based on less rust damage. Etc. Even being able to tell the difference between different types of discrimination based on justification and effects could be seen as discrimination!

*This is, specifically, sexual discrimination, which shows that even such special cases can have the blessing of the PC crowd. It also provides an example of why it is wrong to equate “sexual” and “sexist”, because, no matter how misguided this discrimination is, it is unlikely to be rooted in more than a greater wish for equality of outcome. To boot, it is an example of poor discrimination through focus on the wrong criteria or having the wrong priorities. (Is equality of outcome when hiring really more important than the lives of fire victims?!?)

Why do we need it?

Discrimination is very closely related to the ability to make good decisions (arguably, any decision short of flipping a coin)—and the better someone is at discriminating, the better the outcomes tend to be. Note that this is by no means restricted to such obvious cases as hiring decisions based on education. It also involves e.g. seeing small-but-critical differences in cases where an argument, analogy, or whatnot does or does not apply; or being able to tell what criteria are actually relevant to understanding the matter/making the decision at hand.*

*Consider e.g. parts of the discussion in the text that prompted this one; for instance, where to draw the line between speech and action, or the difference between the IOC’s sponsor bans and bans on kneeling football players. Or consider why my statements there about employers’ rights do not, or only partially, extend to colleges: Lacking sufficient understanding, someone might see the situations as analogous, based e.g. on “it’s their building” or “it’s their organization”. Using other factors, the situation changes radically, e.g. in that the employer pays the employee while the college is paid by the student; that co-workers who do not get along can threaten company profits, while this is only rarely the case with students who do not get along; and that a larger part of the “college experience” overlaps with the student’s personal life than is, m.m., the case for an employee—especially within the U.S. campus system. (For instance, what characteristic of a college would give it greater rights to restrict free speech in a dorm than a regular landlord in an apartment building? A lecture hall, possibly—a dorm, no.)

Indeed, very many of today’s societal problems and poor political decisions go back, at least partially, to a failure to discriminate resp. to discriminate based on appropriate criteria.

Consider e.g. the common tendency to consider everything relating to “nuclear” or “radioactive” to be automatically evil (or the greater evil): Nuclear power is “evil”, yet fossil energies do far more damage to the world. The nuclear bombings of Japan were “evil”, yet their conventional counterparts killed more people. Radioactive sterilization of food is “evil”, yet science considers it safe—much unlike food poisoning… What if discrimination were not done by name or underlying technology, but rather based on the effects, risks, and opportunities?

Consider the (ignorant or deliberate) failure to discriminate between e.g. anti-Islamists and anti-Muslims or immigration critics and xenophobes, treating them the same and severely hindering a civilized public debate.

Consider the failure to discriminate between school children by ability and the enforcing of a “one size fits all” system that barely even fits the average*, and leaves the weakest and strongest as misfits—and which tries to force everyone to at a minimum go through high school (or its local equivalent). (Germany still does a reasonable job, but chances are that this will not last; Sweden was an absolute horror already when I went to school; and the U.S. is a lot worse than it should and could be.)

*Or worse, is so centered on the weakest that school turns into a problem even for the average… Indeed, some claim that e.g. the U.S. “No Child Left Behind Act” has done more harm than good for this very reason.

Consider the failure to discriminate between politicians based on their expected long-term effect on society, rather than the short-term effect on one-self.

Consider the failure to discriminate between mere effort and actual result, especially with regard to political decisions. (Especially in the light of the many politicians who do not merely appear to fail at own discrimination, but actually try to fool the voters through showing that “something is being done”—even should that something be both ineffective and inefficient.)

Consider the failure to discriminate between those who can think for themselves (and rationally, critically, whatnot) and those who can not when it comes to e.g. regulations, the right to vote, self-determination, …

Consider the failure to discriminate between use and abuse, e.g. of alcohol or various performance enhancing drugs. (Or between performance enhancing drugs that are more and less dangerous…)

Consider the undue discrimination between sex crimes (or sexcrimes…) and regular crimes, especially regarding restrictions on due process or reversal of reasonable expectations. (Whether sex is involved is not a valid criterion, seeing that e.g. due process is undermined as soon as any crime is exempt from it.)

Consider the undue discrimination between Israelis and Palestinians by many Westerners, where the one is held to a “Western” standard of behavior and the other is not. (Nationality is not relevant to this aspect of the conflict.)

A particularly interesting example is the classification of people not yet 18 as “children”*, which effectively puts e.g. those aged 3, 10, and 17 on the same level—an often absurd lack of discrimination, considering the enormous differences (be they physical, mental, in terms of experience or world-view, …) between people of these respective ages. Nowhere is this absurdity greater than in that the “child” turns into an “adult” merely through the arrival of a certain date, while being virtually identical to who he was the day before—and this accompanied by blanket rights and obligations, with no other test of suitability. Note how this applies equally to someone well-adjusted, intelligent, and precocious as it does to someone who is intellectually over-challenged even by high school and who prefers to lead a life of petty crimes and drug abuse. (To boot, this rapid change of status is highly likely to make the “children” less prepared for adulthood, worsening the situation further.)

*The size of the problem can vary from country to country, however. In e.g. the U.S. there is a fair chance that a word like “minor” will be used, especially in more formal contexts, which at least reduces the mental misassociations; in Sweden, “barn” (“child”) dominates in virtually all contexts, including (at least newer) laws.

However, there are many other problems relating to the grouping of “children” with children, especially concerning undifferentiated societal and political debates around behavior from and towards these “children”. This in particular in the area of sex, where it is not just common to use terms like “pedophile”* and “child-porn” for the entire age-range, but where I have actually repeatedly seen the claim that those sexually attracted to someone even just shy of 18 would be perverts**—despite the age limit being largely arbitrary***, despite that many are at or close to their life-time peak in attractiveness at that age, despite that most of that age are fully sexually mature, and despite that people have married and had children at considerably lower ages for large stretches of human history.

*This word strictly speaking refers to someone interested in pre-pubescent children, making it an abuse of language not covered by the (disputable) justification that can be given to “child-porn” through the wide definition of “child”. Even if the use was semantically sound, however, the extremely different implications would remain, when children and “children” at various ages are considered.

**Presumably, because the classification of someone younger as a “child” has become so ingrained with some weak thinkers that they actually see 18 as a magic limit transcending mere laws, mere biological development, mere maturity (or lack thereof), and leaving those aged 17 with more in common with those aged 8 than those aged 18.

***Indeed, the “age of consent” is strictly speaking separate from the “age of maturity”, with e.g. Sweden (15) and Germany (14 or 16, depending on circumstances) having a considerably lower age of consent while keeping the age of maturity at 18.

Not all discrimination, depending on exact meaning implied, is good, but this is usually due to a lack of discrimination. Consider e.g. making a hiring decision between a Jewish high-school drop-out and a Black Ph.D. holder: With only that information present, the hiring decision can be based on either the educational aspect, the race/ethnicity aspect, or a random choice.* If we go by the educational or race aspect, there is discrimination towards the candidates. However, if the race aspect is used, then this is a sign that there has been too little or incorrect discrimination towards the hiring criteria—otherwise the unsuitability of the race aspect as a criterion would have been recognized. This, in turn, is the reason why racial discrimination is almost always wrong: It discriminates by an unsound criterion. We can also clearly see why “discrimination” must not be reduced to the meanings implied by “racial [and whatnot] discrimination”—indeed, someone truly discriminating (adjective) would not have been discriminating (verb) based on race in the above example.

*Or a combination thereof, which I will ignore: Including the combinations has no further illustrative value.

Excursion on proxy criteria:
Making decisions virtually always involves some degree of proxy criteria, because it is impossible to judge e.g. how well an applicant for a job fares on the true criteria. For instance, the true criterion might amount to “Who gives us the best value for our money?”. This, however, is impossible to know in advance, and the prospective employer resorts to proxy criteria like prior experience, references, education, … that are likely to give a decent, if far from perfect, idea of what to expect. (Indeed, even these criteria are arguably proxies-for-proxies like intelligence, industriousness, conscientiousness, …—and, obviously, the ability to discriminate!)

Unfortunately, sometimes proxies are used that are less likely to give valuable information (e.g. impression from an interview) and/or are “a proxy too far” (e.g. race). To look at the latter, a potential U.S. employer might (correctly) observe that Jews currently tend to have higher grades than Blacks and tend to advance higher in the educational system, and conclude that the Jew is the better choice. However, seeing that this is a group characteristic, it would be much better to look at the actual individual data, removing a spurious proxy: Which of the two candidates actually has the better grades and the more advanced education—not who might be expected to do so based on population statistics.

As an aside, one of my main beefs with increasing the number of college graduates (even at the cost of lowering academic standards to let the unsuitable graduate) is that the main role of a diploma was to serve as a proxy for e.g. intelligence and diligence, and that this proxy function is increasingly destroyed. Similarly, the greater infantilization of college students removes the proxy effect for ability to work and think for oneself.

Excursion on discrimination and double standards:
Interestingly, discrimination that would otherwise be rejected, usually relating to the passage of time, is sometimes arbitrarily considered perfectly acceptable and normal. A good example is the age of maturity and variations of “age of X” (cf. above)—a certain age is used as an extremely poor and arbitrary proxy for a set of personal characteristics.

In other cases, such discrimination might have a sufficient contextual justification that it is tolerated or even considered positive. For instance, even a well-qualified locker-room attendant of the wrong sex might not be acceptable to the visitors of a public bath, and the bath might then use sex as a hiring criterion. Not allowing men to compete in e.g. the WTA or WNBA can be necessary to give women a reasonable chance at sports success (and excluding women from the ATP or the NBA would then be fair from a symmetry point of view). Etc.

Then there is affirmative action…

Excursion on how to discriminate better:
A few general tips on how to discriminate better: Question whether a criterion is actually relevant in itself, or is just a proxy, proxy-for-a-proxy, proxy-for-a-proxy-for-a-proxy, …; and try to find a more immediate criterion. Question the effectiveness of criteria (even immediate ones). Do not just look at what two things have in common (e.g. building ownership, cf. above) but at what makes them different (e.g. being paid or paying). Try to understand the why and the details of something and question whether your current assumptions on the issue are actually correct—why is X done this way*, why is Y a criterion, why is Z treated differently, … Try to look at issues with reason and impartiality, not emotion or personal sympathy/antipathy; especially when the issues are personal, involve loved ones or archenemies, concern “pet peeves”, or otherwise are likely to cause a biased reaction.

*The results can be surprising. There is a famous anecdote about a member of the younger generation who wanted to find out why the family recipe for a pot-roast (?) called for cutting off part of it in a certain manner. Many questions later, someone a few generations older, and the origin of the tradition, revealed the truth: She had always done so in order to … make the pot-roast fit into her too small pan. Everyone else did so in the erroneous belief that there was some more significant purpose behind it—even when their pans were larger.

Excursion on when not to discriminate (at all):
There might be instances where potential discrimination, even when based on superficially reasonable grounds, is better not done.

For instance, topics like free speech, especially in a U.S. campus setting, especially with an eye on PC/Leftist/whatnot censorship, feature heavily in my current thoughts and readings. Here we can see an interesting application of discrimination: Some PC/Leftist/whatnot groups selectively (try to) disallow free speech when opinions contrary to theirs are concerned. Now, if someone is convinced that he is right, is that not a reasonable type of discrimination (from his point of view)?

If the goal is to push one’s own opinion through at all cost, then, yes, it is.

Is that enough justification? Only to those who are not just dead certain and lacking in respect for others, but who also are very short-sighted:

Firstly, as I often note, there is always a chance that even those convinced beyond the shadow of a doubt are wrong. (Indeed, those dead certain often turn out to be dead wrong, while those who turn out to be right often were open to doubts.) What if someone silences the opposition, forces public policy to follow a particular scheme without debate, indoctrinates future generations in a one-sided manner, …—and then turns out to be wrong? What if the wrongness is only discovered with a great delay, or not at all, due to the free-speech restrictions? Better then to allow other opinions to be uttered.

Secondly, if the power situation changes, those once censoring can suddenly find themselves censored—especially, when they have themselves established censorship as the state of normality. Better then to have a societal standard that those in power do not censor those out of power.

Thirdly, there is a dangerous overlap between the Niemöller issue and the fellow-traveler fallacy: What if the fellow travelers who jointly condemn their common enemies today, condemn each other tomorrow? (Similarly, it is not at all uncommon for a previously respected member of e.g. the feminist community to be immediately cast out upon saying something “heretic”.) Better then to speak up in defense of the censored now, before it is too late.

Fourthly, exposure to other opinions, dialectic, eclecticism, synthesis, … can all be beneficial for the individual—and almost a necessity when we look at e.g. society as a whole, science, philosophy, … Better then to not forego these benefits.

Fifthly, and possibly most importantly, censorship is not just an infringement of rights against the censored speaker—it is also an infringement of rights against the listeners. If we were (I do not!) to consider the act against the speaker justified (e.g. because he is “evil”, “racist”, “sexist”, or just plainly “wrong”); by what reasoning can this be extended to the listeners? Short of “it’s for their own good” (or, just possibly, “it’s for the greater good”), I can see nothing. We would then rob others of their right to form their own opinions, to expose themselves to new ideas, whatnot, in the name of “their own good”—truly abhorrent. Better then to allow everyone the right to choose freely, both in terms of whom to listen to and what to do with what is heard.

Excursion on failure to discriminate in terminology:
As with the child vs. “child” issue above, there are many problems with (lack of) discrimination that can arise through use of inappropriate words or inconsistent use of words. A very good example is the deliberate abuse of the word “rape” to imply a very loosely and widely defined group of acts, in order to ensure that “statistics” show a great prevalence, combined with a (stated or implied) more stringent use when these “statistics” are presented as an argument for e.g. policy change. Since there is too little discrimination between rape and “rape”, these statistics are grossly misleading. Other examples include not discriminating between the words* “racial” and “racist”, “[anabolic] steroid” and “PED”, “convicted” and “guilty”, …

*Or the concepts: I am uncertain to what degree the common abuse of “racist” for “racial” is based on ignorance of language or genuine confusion about the corresponding concepts. (Or intellectually dishonest rhetoric by those who do know the difference…) Similar remarks can apply elsewhere.

(In a bigger picture, similar problems include e.g. euphemistic self-labeling, as with e.g. “pro-life” and “pro-choice”; derogatory enemy-labeling, e.g. “moonbat” and “wingnut”; and emotionally manipulative labels on others, e.g. the absurd rhetorical misnomer “dreamer” for some illegal aliens. Such cases are usually at most indirectly related to discrimination, however.)

Excursion on Wikipedia and Wiktionary:
Wikipedia, often corrupted by PC editors [1], predictably focuses solely on the misleading special-case meanings in the allegedly main Wikipedia article on discrimination, leaving appropriate use only to alleged special cases… A particular perversity is a separate article on Discrimination in bar exam, which largely ignores the deliberate discriminatory attempt to filter out those unsuited for the bar and focuses on alleged discrimination against Blacks and other ethnicities. Not only does this article obviously fall into the trap of seeing a difference in outcome (on the exam) as proof of differences in opportunity; it also fails to consider that Whites are typically filtered more strongly before* they encounter the bar exam, e.g. through admittance criteria to college often being tougher.**

*Implying that the exam results of e.g. Blacks and Whites are not comparable. As an illustration: Take two parallel school-classes and the task of finding all students taller than 6′. The one teacher just sends all the students directly to the official measurement; the other grabs a ruler and only sends those appearing to be taller than 5′ 10″. Of course, a greater proportion of the pre-filtered students will pass the 6′ measurement… However, this is proof neither that the members of their class are taller (in general), nor that the test favors their class over the other.

**Incidentally, a type of racial discrimination gone wrong: By weakening criteria like SAT success in favor of race, the standard of the student body is lowered without necessarily helping those it intends to help. (According to some, e.g. [2] and [3], with very different perspectives and with a long time between them.) To boot, this type of discrimination appears to hit another minority group, the East-Asians, quite hard. (They do better on the objective criteria than Whites; hence, they, not Whites, are the greater victims.)

Worse, one of its main sources (and the one source that I checked) is an opinion piece from a magazine (i.e. a source acceptable for a blog, but not for an encyclopedia), which is cited in a misleading manner:* Skimming through the opinion piece, the main theses appear to be (a) that the bar exam protects the “insiders” from competition by “outsiders” by ensuring a high entry barrier**, (b) that this strikes the poor*** unduly, and (c) that the bar exam should be abolished.

*Poor use of sources is another thing I criticized in [1].

**This is Economics 101. The only debatable point is whether the advantages offset the disadvantages for society as a whole.

***Indeed, references to minorities appear to merely consider them a special case of the “poor”, quite unlike the Wikipedia article. To boot, from context and time, I suspect that the “minorities” might have been e.g. the Irish rather than the Blacks or the Hispanics…

Wiktionary does a far better job. To quote and briefly discuss the given meanings:

  1. Discernment, the act of discriminating, discerning, distinguishing, noting or perceiving differences between things, with intent to understand rightly and make correct decisions.

    Rightfully placed at the beginning: This is the core meaning, the reason why discrimination is a good thing and something to strive for, and what we should strive to preserve when we use the word.

  2. The act of recognizing the ‘good’ and ‘bad’ in situations and choosing good.

    Mostly a subset of the above meaning, with reservations for the exact meaning of “good”. (But I note that even a moral “good” could be seen as included above.)

  3. The setting apart of a person or group of people in a negative way, as in being discriminated against.

    Here we have something that could be interpreted in the abused sense; however, it too could be seen as a subset of the first item, with some reservation for the formulation “negative way”. Note that e.g. failing to hire someone without a license to practice medicine for a job as a practicing physician would be a good example of the meaning (and would be well within the first item).

  4. (sometimes discrimination against) Distinct treatment of an individual or group to their disadvantage; treatment or consideration based on class or category rather than individual merit; partiality; prejudice; bigotry.

    sexual or racial discrimination

    Only here do we have the abused meaning—and here we see the central flaw: The example provided (“sexual or racial discrimination”) only carries the given meaning (in as far as exceeding the previous item) when combined with a qualifier; dropping such qualifiers leads to the abuse. “Sexual discrimination”, “racial discrimination”, etc., carry such meanings—just “discrimination” does not. This makes it vital never to drop these qualifiers.

    Similarly, not all ships are space ships or steam ships, the existence of the terms “space ship” and “steam ship” notwithstanding; not all forests are rain forests; sea lions are not lions at all and sea monkeys are not even vertebrates; …

    Note that some of the listed meanings only apply when viewed in the overall context of the entire sentence. Bigotry, e.g., can be a cause of discrimination by an irrelevant criterion; however, “sexual discrimination”, etc., is not itself bigotry. Prejudice* can contain sexual discrimination but is in turn a much wider concept.

    *“Prejudice” is also often misunderstood in a potentially harmful manner: A prejudice is not defined by being incorrect—but by being made in advance and without knowing all the relevant facts. For example, it is prejudice to hear that someone plays in the NBA and assume, without further investigation, that he is tall—more often than not (in this case), it is also true.

  5. The quality of being discriminating, acute discernment, specifically in a learning situation; as to show great discrimination in the choice of means.

    Here we return to a broadly correct use, compatible with the first item, but in a different grammatical role (for want of a better formulation).

    I admit to having some doubts as to whether the implied grammatical role makes sense. Can the quality of being discriminating be referred to as “discrimination”? (As opposed to e.g. “showing discrimination”.) Vice versa for the second half.

  6. That which discriminates; mark of distinction, a characteristic.

    The same, but without reservations for grammatical role.

Written by michaeleriksson

August 9, 2018 at 2:08 am