An American Editor

February 26, 2014

On Books: Dictionary of Untranslatables

Over my career as an editor, I have observed that no matter how much I know about language and usage, I know very little. Consequently, I am always on the lookout for books to add to my collection that I can also use in my work as an editor.

Regular readers of An American Editor know that my primary rule when editing is that the message from the author must be unmistakably communicated to the reader. Should there be any possible doubt about the message, then the language used is questionable.

In that light, I have always assumed that certain words that are used in American prose have clear and precise meaning when used to convey an author’s thoughts. In most instances, I, like many editors and readers, failed to consider the broader concepts that certain words convey; I understood, or so I thought, the common, everyday meaning and assumed it was that meaning that the author was using.

Words, however, can be philosophical in the sense that a word can be both specific and can be used as a substitute for a broader, more conceptual perspective. In my early years, I learned, for example, that the Russian word pravda, which was used as the name of a Soviet Russia newspaper (Pravda), was translated as “truth” — read Pravda and learn the truth about what was happening in Russia and the world.

Unambiguous words — truth, vérité, Wahrheit — are used to translate the word pravda but, as the Dictionary of Untranslatables: A Philosophical Lexicon (Barbara Cassin, editor, Princeton University Press, 2014 [English translation]; originally published in France as Vocabulaire européen des philosophies: Dictionnaire des intraduisibles, 2004) notes, pravda also means justice. And the scope of its meaning as truth is limited: according to the Dictionary, “Pravda is never used to designate scientific truth.”

What the Dictionary does is trace the origins, usage, and conceptual meanings of a selection of words that are important in the worlds of literature, philosophy, and politics, yet which are not easy to translate (and sometimes are wholly untranslatable) from one language to another. The Dictionary illustrates that words that seem readily translatable, such as pravda, actually carry meanings and nuances that are important to understanding the concept behind the word, a concept that leads to a definition different from the one the standard translation implies.

In its exploration of words, the Dictionary’s article authors delve into the cross-linguistic and cross-cultural complexities of the words and their meanings. The terms chosen for exploration have had a great influence on thinking over the ages. The Dictionary cites a word’s contextual history and usage to give additional meaning to the discussion.

Consider the entry for “matter of fact, fact of the matter.” The discussion is of the expression “matter of fact,” which is “found in English philosophy, notably Hume.” The discussion dissects the expression in an attempt to establish its origins and meanings. Following a several-page discussion, the article ends with a bibliography. The bibliographies that follow each entry are interesting in their own right.

The idea of the Dictionary is to elucidate how the concepts behind the included words and expressions differ depending on the language in which a word or expression is used, both originally and in translation. The languages are Arabic, Basque, Catalan, Danish, English, French, German, Greek (classical and modern), Hebrew, Hungarian, Latin, Polish, Portuguese, Romanian, Russian, and Spanish.

The terms are often transferred from one language to another without change. For example, praxis and polis are used in a variety of languages without translation; they have become part of a second language’s lexicon as if they were original to that language. Other terms are often mistranslated, even if just in the sense that the translation doesn’t express the breadth of the word’s meaning in its original language (e.g., pravda).

The essays make for some interesting reading. Even if a particular word is not one that I would encounter in my daily editing, reading the essays makes me think about the words I do see daily. In other words, not only are the essays interesting in what they have to say about a particular word’s origins and meanings, but they help reshape my approach to words as an editor.

The Dictionary of Untranslatables is a wonderful addition to my language library. I view the Dictionary in the same light I view Steven Pinker’s books on language: not as a resource that I will open daily as I would my Webster’s Collegiate, but as a book to savor, think about, and learn from in the broader sense of learning. For anyone interested in language, in words, and in the scope of meaning that a word can encompass, I recommend the Dictionary of Untranslatables.

If you would like to see a sample entry, Princeton University Press offers a few samples. This link will take you to the page where you can view online, in PDF format, a few entries. You might find the kitsch entry particularly interesting.

February 12, 2014

To Serial or Not to Serial?

One thing I have noticed over the years is that what was once controversial in editing comes back to be controversial again. Like the cycle of life, editorial controversies are never put to permanent rest.

The current resurrected argument is whether or not to use serial commas.

My first thought was “what difference does it make whether serial or nonserial rules the manuscript?” My second thought was “what is the primary task of an editor and how does that task mix with the to-serial-or-not-to-serial question?”

We probably should begin with composition, because that is where this controversy has its origins. The more characters there are in a manuscript, the longer the manuscript. If “unnecessary” characters can be omitted, space will be saved and the cost of production will decline. It might not matter greatly if only one copy is being published, but multiply the savings over thousands of copies and over many manuscripts, and the savings become significant. Welcome to the age of bean counting.

(This attempt to save money is also the foundation of the notion that there is no space on either side of a dash. But I digress….)

What is the primary reason to have a manuscript edited? I see the primary purpose as clear communication. What is the primary purpose of punctuation? To afford the reader clues as to the message the author intends to convey.

Consider this phrase: pregnant women and children. A professional editor would not let such a phrase stand. Why? Because it is not clear whether both the women and the children are pregnant or just the women. Of course, many arguments can be made as to how pregnant does not modify children, but there only needs to be one argument that it does to make the phrase questionable.

Similarly, as Lynne Truss famously pointed out, a professional editor would not let the phrase “eats shoots and leaves” stand without querying it. Whereas in the “pregnant women and children” instance rewording for clarification is the appropriate path, in the “eats shoots and leaves” conundrum, the correct path is punctuation.

Yet how much punctuation? If the intended meaning is that the actor “eats” some food, “shoots” another actor, and “leaves” the premises, then serial commas are needed: eats, shoots, and leaves. With the serial commas, there is no mistaking the meaning. But those who oppose serialization would prefer a single comma: eats, shoots and leaves.

How clear is the single comma version? Not at all. There are two viable possibilities: the actor “eats,” and what the actor is eating is “shoots and leaves”; or the actor “eats,” then “shoots” another actor and “leaves” the premises. How is the reader to know which is meant?

Clearly there are a multitude of ways to avoid this situation (e.g., “eats bamboo shoots and leaves”), but the question under consideration is serialization. The premise of the antiserialists is that excessive punctuation interferes with the reading flow, so minimizing the amount of punctuation enhances the reading experience. Proserialists, on the other hand, see punctuation as necessary to ensure understanding and thus as an enhancer of reading flow, because the reader does not have to constantly stop and attempt to discern what the author intended.

I admit that I fall in the proserialist camp. I see the role of punctuation as the same as highway signage — I need enough of it so that I do not need to stop in the middle of the highway to think about whether to turn right or left.

Editing is about comprehension, not about saving space. Editing is intended to laser focus on author meaning, not on fulfilling the latest lexical fashion. Serial commas rarely mislead a reader, unlike absence of a serial comma. So what harm comes about by serializing? A professional editor’s goal is to make the reading experience so smooth that the reader absorbs the author’s message without consciously realizing she is doing so.

Sufficient punctuation is one of the tools that brings this about. Insufficient punctuation requires a reader to either stop and attempt to decipher the author’s meaning or to gloss over the author’s point in hopes that either the point was not critical or that it will become clear subsequently. But having a reader battle with insufficient punctuation is not in either the author’s or the reader’s interest.

In the case of “eats, shoots and leaves,” the reader either inadvertently draws the correct conclusion or stops to ponder what is meant. What is the negative to the serial usage, assuming the intended meaning is “eats, shoots, and leaves”? There is none, and there rarely is a problem using the serial comma (assuming its use conveys the correct meaning). So why have a rule that insists that the serial comma be avoided whenever possible? Why not make the rule always use the serial comma?

I am convinced that the rationale for the avoidance rule has nothing to do with communication, understanding, readability, or any of the other metrics that a professional editor should be concerned with when editing. I believe that it is an accountant’s rule: Omitting the “excess” punctuation lowers the financial outlay for a manuscript. The accountant’s rule does not address any of the metrics that might cause a manuscript to succeed or fail in the marketplace; instead, it laser focuses on cost.

Yet it strikes me that the cost of misunderstanding, of missing the author’s message is far greater than the financial cost of serializing. If readers have to struggle to understand an author, reviews and recommendations are likely to be negative and thus decrease sales. Ease of reading and understanding cannot be divorced from the decision to serialize or not serialize. The professional editor does not work with absolute rules. For the professional editor, all rules bow to the one rule regarding comprehension, and all rules (except that of comprehension) are flexible.

The difference between a professional editor and a nonprofessional editor lies in the rigidity with which the editor applies the “rules” laid out by style manuals and third parties. The more professional the editor, the more the editor determines for herself what the appropriate rules are that govern a particular project, even if it means explaining to a client why a client’s “rule” is being ignored.

Do you agree?

Richard Adin, An American Editor

December 23, 2013

Faux Controversies and the Singular Plural

On another forum it was asked whether authors should “push the grammar envelope” and embrace the singular plural. I think the wrong question is being asked when you ask whether authors should push the grammar envelope for two reasons: First, because it ignores the purpose of grammar, which is to ensure that there is communication between author and reader. Second, because to push the grammar envelope assumes that there are firm rules to be pushed. The first reason far outweighs the second, but neither is ignorable.

Regarding the singular plural, it is neither pushing the envelope to use it nor a violation of a firm rule nor a distraction from communication (in most cases; there are cases in which it is clearly wrong because its use is confusing). In other words, I think that editors, writers, grammarians, usage gurus, etc., make the proverbial mountain out of the molehill when they oppose the singular plural.

Consider what makes a great editor. A great editor is someone who ensures that a reader understands the editor’s author; that is, ensures that the reader does not leave the book thinking the author is in favor of, for example, genocide, when the author intends the contrary. An average editor can cite chapter and verse of why x is not to be done, but cannot explain why doing x makes the author’s point unintelligible. The amateur editor either blindly accepts the singular plural or remembers having been taught that the singular plural is incorrect and thus blindly changes it.

However, if the singular plural is incorrect, it is incorrect because it makes the author’s point unintelligible, not because a group of self-appointed grammarians have written that it is wrong.

English is difficult enough without making it impossible. Editors constantly twist and turn to apply “rules” of grammar in the mistaken belief that there are rules of grammar. What are too often called rules are really current conventions.

Be clear that I am not referring to spelling and whether the correct choice in context is “rain,” “reign,” or “rein.” Equating spelling with grammar is another common mistake; spelling and grammar are companions, not a single entity.

English lacks the singular plural pronoun. In my schooldays, it was easy to lose points on an otherwise brilliant essay by using the plural pronoun as a singular pronoun. The convention (i.e., “rule”) was that the singular plural was forbidden. Instead, you were expected to rewrite the sentence to avoid the singular plural, even if it meant twisting and turning an otherwise coherent statement into a convoluted mess. Style was more important than substance.

Today’s argument between propluralists and antipluralists amounts to both a faux argument and making style more important than substance. This is not to say that the singular plural is always correct or that a particular sentence could not be made better by avoiding the singular plural. Rather, it is to say that when arguing over the singular plural, we lose sight of what really is important: How well does the sentence communicate to the reader?

The difference between editors, especially between the professional editor and the nonprofessional editor, is the emphasis each places on evaluating each word and sentence on its ability to communicate the point accurately to the reader. Because we use the singular plural in common speech and understand it in context, there should not be a problem in using it in writing when its use eases communication.

I suppose this controversy is just another in the grammar wars between traditionalists and modernists. Bryan Garner (Garner’s Modern American Usage, 3rd ed.) falls into the traditionalist camp. He sees the rise of the singular plural as an attempt to avoid sexism (which it is). As he writes, “It is the most convenient solution to the single biggest problem in sexist language — the generic masculine [also, I would say, feminine] pronoun” (p. 179). His answer is to avoid it whenever possible.

Modernists tend to think in unisexual terms; that is, if it can be applied to both males and females, we need to avoid picking one as the example. Thus the use of the singular plural. Over the past 50 years, as a result of the cultural war on sexism, English speakers have become so accustomed to the singular plural as a “normal” part of speech that it seems foolish to make all possible effort to avoid the construction.

In many ways, this faux controversy reminds me of the split infinitive “rule” and the twisting and turning we had to put language through to avoid splitting the infinitive. Had we instead focused on the communication aspects, we would have recognized that rigid application of the splitting rule was wasteful and illogical. That same recognition should be extended to the singular plural. We should recognize the limitations of English as a language and compensate for those limitations in the most logical manner, as long as clear communication is not jeopardized.

Which brings us back to what I consider the fundamental rule, the fundamental arbiter of grammar: Does use of the singular plural detract from clear communication to the reader? If it doesn’t detract from clear communication, then leave it be as long as it is otherwise properly used.

Editors need to remember that language is fluid. They also need to remember that there really are no rigid rules of grammar except the rule of clarity. Grammar rules, with the clarity exception, are merely conventions or suggestions upon which a large group of society have agreed. They are not intended, except by the fanatical few, to be blindly adhered to and applied. Garner says to use the singular plural cautiously “because some people may doubt your literacy” (p. 179), but I think use of the singular plural is so common today that very few would raise the question. As long as the material is clear, I see little strength to the argument to studiously avoid the singular plural. If the material can be made clearer by avoiding the singular plural, then it is the obligation of the editor to do so. Otherwise, relax and flow with its use.

December 9, 2013

The Miseducation of the Next Generation

Filed under: Editorial Matters, On Language — Rich Adin @ 4:00 am

When I was in elementary school in the 1950s, as part of the language-learning experience we read the New York Times. I still remember the very first lesson, which was devoted to teaching us how to fold the Times so that it was both holdable and readable. Every school day, time was devoted to reading something in the Times.

The teacher assigned one article that everyone had to read and then we were free to pick another article that interested us. The reading was followed by a discussion, not only of the content of the article we all had to read, but of the grammar. We also had to mark words that were unfamiliar, look them up in the dictionary, rewrite the dictionary definition in our own words, and then write five sentences that used the word. The teacher collected those words and found ways to incorporate them into our other classwork.

The Times was a teaching tool. It taught grammar and spelling; it made us aware of the world around us; it taught us to read something other than the dime novels that were surreptitiously passed around for their “eroticism” (which were, by today’s standards, not even worthy of the label “erotic” but were great treasures to us). The Times was admired by teachers for its “literary” quality.

Just as generations change, so did teaching change and so did the Times change. By the time my children were in elementary school, the practice of daily reading of a newspaper had disappeared. Teaching had changed as a profession, but more importantly, newspapers had changed. Copyediting of articles was in the decline; where once there were very few grammar and spelling errors in a newspaper, now they were plentiful, with some newspapers much worse than others.

In addition, the 1960s brought about a philosophical shift. If a newspaper was going to be used in the classroom, it was more likely to be the New York Post or the New York Daily News (or similar paper) than it was the New York Times or the Herald Tribune. Schools became more politically nuanced.

The decline in newspaper reading mirrored a decline in time and effort spent learning the fundamentals of good written and verbal communication. In my school days, we had two languages: the more formal, proper, “good” English that was to be used in the classroom, when talking with adults, and when writing, and the informal street language that was used to communicate with peers. Schools enforced the separation and focused on teaching us to master the former; the latter was strictly for use off school grounds and among peers. Even parents insisted on the more formal language usage at home. But this changed with the next generation.

When my children were in school, the two heretofore separate languages became one. As my children rose in grades and the teachers became younger, I noted that even the teachers didn’t separate the languages. We had moved to the era of a single language. Trying to enforce the separation at home was impossible because the children had little exposure to the more formal language. And with this change came the demise of what had been the method of teaching language in my school days.

Part of this change is a result of changes newspapers instituted in order to better meet shareholder and Wall Street demands. Editing has always been invisible and doesn’t become visible in its worst forms until after the product is bought. There are no recalls for poor spelling or grammar; there are no refunds. Consequently, editorial staff reductions could be made with impunity, unlike writing staff reductions.

Where once newspapers could be held up as the everyman’s grammar, spelling, and usage guide, they no longer can. Newspapers were once inexpensive, current, daily relevant language guides for young students; today they cannot be held up as examples of good language. Consider this quote from a recent op-ed piece in my local newspaper:

Some folks balk at public financing of campaigns, but if we think that taxpayer dollars are not already being expended and public funds grossly wasted in our current pay-to-play system, we are fooling themselves.

In the issue that this quote ran, I found a dozen similar errors. If newspapers “speak” like this, is it any wonder that people speak and write like this? Websites are no better.

In the beginning, websites were written with care. Then came the need to get a website up quickly and worry about errors later. Websites were followed by short messages (think Twitter) that require compressing as much as possible into as little as possible.

In all of these instances, language skills changed and the messenger services lost the mantle of being language teachers. And this is where the next generation is being miseducated: There no longer is an inexpensive, ubiquitous, broadly recognized teacher of language. In my elementary school days, every school district had access to, and most took advantage of, very inexpensive school subscriptions to the Times, which were accompanied by teaching guides. (I remember paying 25¢ a week for the Times and taking it home with me for my parents to read.) The Times was recognized for its language quality and thus was a teaching tool.

Today’s students and tomorrow’s students are not being similarly exposed to correct grammar and usage because there is no broadly recognized language teacher. I see the effects of this change in the manuscripts I edit, in the job applications I receive, in the tests job applicants submit and I review. Our profession’s future may be less than glorious as our ranks fill with editors who need remedial language education themselves. That there may not be anyone capable of providing that remedial education is also a concern.

What, you may be asking, has brought about this doom-and-gloom view? The answer, I am sorry to report, is an application I received from a veteran (9 years) English teacher who was looking to supplement her income by doing some freelance editing. She misused, as examples, “your” and “there.” When I pointed this out, her reply was, “You understood me, didn’t you? That should be the criteria.” (I didn’t point out that it is criterion, not criteria.)

Perhaps she has it right. What difference does it make if it is “there” or “their” as long as the message is understood? No, she is wrong, because knowing the difference between the two words is part of understanding the message. If I didn’t know what the correct word was, I might not recognize the message’s meaning.

I see the demise of proper language in newspapers as a reflection of the demise of understanding grammar and spelling in the halls of academia. Do you see it that way, too?

November 11, 2013

Four Questions & Jargon

Every editor has to deal with jargon, because every form of writing has jargon designed to speak to the author’s audience. The question that editors need to resolve is this: Should I delete jargon? Today’s guest essayist, Erin Brenner, tackles the question by asking four questions about the jargon and its use.

Erin Brenner is the editor of the Copyediting newsletter and the owner of Right Touch Editing. You can follow her on Twitter. Erin is a guest presenter at various conferences on topics of interest to freelancers.

_________________

Before Deleting Jargon, Ask These Four Questions

by Erin Brenner

Copyeditors are trained to spot jargon. We’re taught to see it as obscuring meaning, as something designed to keep readers out, so delete it we must. Yet jargon can be helpful as well. For those familiar with it, jargon can provide a concise way to say something.

Instead of automatically deleting jargon, we should be considering whether it’s helpful to the reader.

How do you do that? Let me show you.

Ask Four Simple Questions

While editing the newsletter recently, I came across three jargon terms in an article about email discussion lists by Katharine O’Moore-Klopf: listmate, onlist, and offlist. I’ve been participating in such discussion lists for a long time, so I didn’t even blink at the terms.

But one of my copyeditors, Andy Johnson, did. Given that the article is aimed at folks who don’t currently participate in discussion lists or boards, would those readers understand the terms?

Johnson knows that jargon must be helpful to readers or be removed—just like every other word in a manuscript. To determine whether the terms are helpful, I apply a list of questions I wrote for deciding whether to use a neologism in a manuscript:

  • Does the word in question mean what the author intends it to mean?
  • Does the word fit the style and tone of the text?
  • Will any connotations of the word inhibit the author’s intended message?
  • Will the audience understand what the author means by this word?

In this situation, O’Moore-Klopf was using the terms correctly. They fit well within the piece, and there were no connotations of the terms to inhibit meaning. The problem was whether the audience would understand the terms. The article was targeting readers who don’t currently participate in discussion lists, remember.

We could determine whether there would be readers who wouldn’t understand the jargon by determining how well known the jargon was outside of discussion lists. I started my search for these words:

  1. In several dictionaries. No results.
  2. In the Corpus of Contemporary American English. Zip.
  3. On Google. Now I had a few results:

a. listmate: There is some list management software called ListMate that grabs most of the first few results pages; about four pages in, I found one result that was in the subject line of a Yahoo Group message.

b. onlist: Results included social media handles and program commands. They also included some descriptions of activity on a list (as opposed to off it).

c. offlist: This term had the best results. It appears in Wiktionary, a Minecraft forum, and discussion comments. It’s also used as a tag on Instagram and Pinterest.

By this point, I had a fair idea that these terms weren’t used much, if at all, outside of discussion lists. Still, I checked one more place, Google Books, and came up with a few accurate results. Most were books about discussion lists or marketing; one was on relationships in the digital world. Another was a book about language usage in a specific culture, while another was a fiction book set in the modern day.
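The lookup workflow described above (dictionaries, a corpus, the open web) can be mimicked in miniature with a frequency check against whatever text samples you have on hand. The sketch below is purely illustrative, not a real corpus tool: the sample strings are hypothetical stand-ins for an actual general-English corpus and an industry corpus.

```python
import re
from collections import Counter


def term_counts(text, terms):
    """Count case-insensitive whole-word occurrences of each term in a text sample."""
    words = Counter(re.findall(r"[a-z]+", text.lower()))
    return {t: words[t.lower()] for t in terms}


# Hypothetical stand-ins for real corpus samples.
general_sample = "She eats bamboo shoots and leaves the forest."
industry_sample = "Reply offlist if a listmate asks; otherwise keep the thread onlist."

jargon = ["listmate", "onlist", "offlist"]
general = term_counts(general_sample, jargon)
industry = term_counts(industry_sample, jargon)

for term in jargon:
    if general[term] == 0 and industry[term] > 0:
        print(f"{term}: industry-only; consider glossing or rewording")
    elif general[term] == 0 and industry[term] == 0:
        print(f"{term}: not found; likely unfamiliar")
    else:
        print(f"{term}: appears in general usage; probably safe")
```

A term that turns up only in the industry sample is a candidate for glossing or rewording for a general audience, which is essentially the conclusion reached for onlist and offlist here.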

Given that the terms were not often found in the mainstream, I hesitated to use them. Listmate seemed self-explanatory once the idea of discussion list had been introduced. Onlist and offlist were less clear; although our readers are intelligent and could parse out the meaning, we’d be making them work for it. Could we say the same thing without those terms?

We could. In most places where onlist was on its own, we could just drop it; the context was clear already:

If you have a funny work-related anecdote that you can share [deleted: onlist] without violating anyone’s privacy or embarrassing anyone, do so.

Offlist didn’t appear on its own, but when paired with onlist, we rewrote the phrasing:

Avoid complaining, either on the list or in private e-mail conversations [was: onlist or offlist], about colleagues, listmates, and clients.

In addition to the places I noted above, consider checking industry publications similar to yours and a news database, such as Google News. The first will tell you if the jargon is common in your industry, and the second will tell you if the jargon has become familiar to a wider audience. If the jargon appears in either place, you can feel comfortable keeping it in your manuscript.

Copyeditors don’t have to spend a great deal of time trying to determine how mainstream a piece of jargon is. It took me about 10 minutes to lightly research all three terms and decide that the issue my copyeditor had raised was valid. I saw enough evidence to advise the author and seek her preferences.

_________________

Do you have “rules” that you apply to determine whether jargon should be deleted? Are they the same as Erin’s four questions or something different? Some professional editors work in niche subject areas, for example, medical books written by doctors for doctors or computer programming books written by programmers for advanced-level programmers. Are the rules about jargon and the questions to be asked about jargon outlined by Erin applicable?

Perhaps most important: Does eliminating jargon really matter in today’s Internet and Twitter age?

What do you think?

November 6, 2013

On Language: Are There Rules?

A colleague wrote on another forum: “Yet these [rules of grammar] are elements of correct use of language (and key in quality editing/writing).”

What is correct use of language is arbitrary. “You and I” can be as correct as “you and me” — it just depends on the dominant grammar trend at the time of usage. Grammar “rules” are simply conventions that some self-appointed group of “authorities” has determined reflect current values in expression, which values many current writers and editors accept and agree with. If that were not true, then the rules today would be identical to the rules of 500 years ago and would be immutable, yet we know that grammar rules are always in a state of flux.

I am of the opinion that there is only one true grammar rule: The manner in which something is spoken or written must be such that the listener or reader can make no mistake about the speaker/writer’s intent and meaning. Aside from that, all so-called rules of grammar are here-today-gone-tomorrow rules of consensus.

Consider style manuals and usage manuals. If rules were universal and permanent, there would be no need for more than a single style manual, usage guide, or even dictionary, as there could be no difference and no room for interpretation. Yet we have many of each, and each has differences from the others.

Consider this example: “early rising people” versus “early-rising people.” Which is grammatically correct depends on which style manual, grammar book, and usage manual one looks at and applies. Or, better yet, consider the serial comma — now abandoned in British and Canadian English, except for when it would enhance clarity, and under assault in American English. Yet for decades, using the serial comma was the rule (and one that I think should be kept because its use improves clarity).

Correct use of language is neither a black-and-white proposition nor written in stone. Rather, it is more like silly putty.

The sibling proposition is that “to break the rules [of grammar], you must first know the rules.” In a sense, that proposition is true. But with the rules in a state of flux, it is difficult to nail them down so that one can know which rule one is breaking. I think that perhaps the rules being broken are less rules of grammar than rules of current consensus and spelling.

In discussions with colleagues, I have noted that when the talk gets to grammar, the discussion really becomes one of word choice. Grammar is the structure of a sentence; word choice (and its companion, spelling) is using the correct word, spelled correctly (e.g., taught vs. taut). Yet sentence structure isn’t rigidly defined, even though some grammarians would have us believe otherwise.

Sentence structure, like most things in editing, really revolves around understanding, communication, and clarity: Is the sentence written so that a reader can understand it? Does the sentence communicate the message the author wants to communicate or is it communicating a different message, even if only to a few readers? Is the sentence so clear that there is no possibility that a reader will misinterpret the sentence and what is being communicated by the author?

Rules of grammar are intended to promote those three principles without becoming so inflexible that either the meaning or the drama of the sentence is lost. How sterile is “to go boldly” compared to “to boldly go”? Ultimately, the rules of grammar boil down to this question: What is in charge?

If rules are in charge, then there is no room for flexibility; either the rule is met and satisfied or it is ignored and broken (consider, e.g., the “rule” against splitting infinitives). If the rule is ignored and broken, and there is no effective mechanism for enforcing compliance with it, then it is not a rule; at most, it is a suggestion based on past experience that has been created by a self-selected group. When was the last time you nominated and voted for someone to be part of the grammar rule-making board?

Today, we know that the rule against splitting infinitives was a misguided attempt to squeeze American English into a mold into which it could not fit. Yet the attempt lasted for decades. I remember losing points on essays in high school for not adhering to that rule. Not one English teacher questioned the rule or its soundness; every one enforced it by lowering a paper’s grade. Yet, inexorably, the rule met its death because it could not be enforced outside the classroom. Consequently, one must question whether, with such willy-nilly enforcement, it ever was really a rule or just a suggestion.

Today’s rule in opposition to the serial comma is similar to the split-infinitive rule. Is it more deceptive to the reader to have the extra comma than to forgo it? What harm does the inclusion of the comma cause? Even in an economic sense, the cost of the serial comma probably doesn’t amount to even pennies on a print run.

A movement to minimize punctuation is afoot. The trend began in British English, which is also where the trend to do away with the apostrophe seems to have been born and taken root, and it has spread. But the rule is not much of a rule, because it has the clarity exception: if clarity is improved, keep the serial comma.

The importance of this recently surfaced in a discussion I had with a client. My client complained that in a book that was to follow Canadian English, we used the serial comma, and Canadian English does not use the serial comma. As I noted to the client, such a broad statement is wrong. Canadian English prefers not to use the serial comma but accepts it where it enhances clarity. So, I asked my client, who decides the issue of clarity? The answer is the editor initially and the reader thereafter. Consequently, if the editor decides to include the serial comma, it is not wrong. “Which,” I asked, referring to Lynne Truss’s excellent book, “is clearer: eats shoots and leaves, eats, shoots and leaves, or eats, shoots, and leaves?” “The answer,” I wrote, “depends on which is meant; between the second and third options, the addition of the final comma makes a world of difference in clarity.”

My client, as is the right of clients, was unimpressed and instructed that Canadian English does not approve of the serial comma and, therefore, we were not to use the serial comma. To the client, this was a rule of grammar, and as a rule, not to be violated.

As editors, we fail our clients and the public by referring to rules of grammar rather than to grammar suggestions. Today’s “rules” of grammar are simply reflections of today’s language fads. Tomorrow, different rules will come about that abrogate the former rules. Although I have yet to succeed, I continue to try to educate clients that there are no immutable rules of grammar except for the three principles of understanding, communication, and clarity. If those three principles are met, then the rules of grammar have been satisfied; how we structure the text to meet them merely makes it conform more or less closely to current grammar suggestions.

October 21, 2013

To Hyphenate or Not to Hyphenate?

Recently, in editing my essays for my forthcoming book, The Business of Editing: Effective and Efficient Ways to Think, Work, and Prosper (ISBN 978-1-4341-0369-7; Waking Lion Press; 2014), Ruth Thaler-Carter raised this question:

“Shouldn’t custom built locally be custom-built locally?”

There are three editors on this project — Ruth, myself, and Jack Lyon — which has meant there have been some lively language discussions, and this was another of them. The opinion was split 2–1 in favor of hyphenation. Mine was the dissenting opinion, and I won the battle as the author and final decider, but that doesn’t mean my decision was the grammatically right one; it just means that, as the named author, I had final decision-making power and exercised it.

If you look up “custom built” in the dictionary (The American Heritage Dictionary of the English Language [5th ed.] and Merriam-Webster’s Collegiate Dictionary [11th ed.]), you find the entry hyphenated, followed by “adj.” It is those three letters that cause the problem.

I agree that custom built needs to be hyphenated in an adjectival phrase, such as custom-built computer. But when not used in an adjectival phrase, as in “custom built locally,” I see no reason to hyphenate. What does hyphenation accomplish? Is a reader misled in the absence of the hyphenation? Is “custom built locally” more understandable when hyphenated and, conversely, less understandable when not hyphenated?

This is similar to the questions raised by short term and long term. A look at the dictionaries indicates that these are also adjectives and hyphenated. But there is no mention of when they are not adjectives. For example, “When the short term expires, payment will be due.”

Editors rely on dictionaries and other usage tomes for guidance — and so editors should. But the emphasis has to be on guidance. Editors are supposed to consider, evaluate, and exercise judgment with the ultimate goal of ensuring that the reader understands the author.

So the question arises: Do phrases that are hyphenated when used as adjectives continue to be hyphenated when not used in adjectival form? (Yes, I recognize that there are other forms in which the hyphenated version is needed or required, including in certain noun situations; let’s ignore those situations and look toward a more general rule.)

(Let me make clear that editors have, and should have, differences of opinion about such matters of grammar as hyphenation. Regardless of how we ultimately “resolve” today’s question, there is no absolute right or wrong. Rather, we seek a guiding rule. Ultimately, it is my belief that a professional editor can and should make decisions, such as whether to hyphenate, based on whether the editor can support the decision.)

Perhaps a good phrase to evaluate is decision making. I raise it because it does not appear in the dictionary, yet whatever rule we generate would be as applicable to decision making as to short term and custom built. I suspect that we would all agree that in this instance, decision making should be hyphenated: “In the decision-making process, …” But should it be hyphenated in this usage: “It is clear that the decision making was faulty”? In this latter sentence, the absent but implied word is “process.” Is implication sufficient to warrant hyphenation?

Or what about these pairs: “Betty was the decision maker” versus “the decision-maker Betty”? In the former, the modifier precedes the phrase; in the latter, it follows on its heels. Hyphenation is clearly warranted in the latter; not so in the former.

In the end, I fall back on my “rule” that what governs is clarity. If hyphenation will make the meaning clearer, then hyphenate; if it neither enhances nor decreases clarity, then don’t hyphenate. I do not stand alone in this view. The Chicago Manual of Style (16th ed., §7.85 for those who require “authority”) says:

“In general, Chicago prefers a spare hyphenation style: if no suitable example or analogy can be found either in this section or in the dictionary, hyphenate only if doing so will aid readability.”

The problem with Chicago’s guidance is that it still leaves us in the dark about whether to hyphenate short term, long term, decision making, and custom built — unless we latch onto the final clause, “hyphenate only if doing so will aid readability,” which is what I use to make my decision. In the case of decision making, I can also latch on to the noun + gerund examples Chicago provides in the table that accompanies §7.85, where Chicago specifically says “decision making” and “decision-making group.”

On the one hand, it strikes me that short term, long term, and custom built should be no different than decision making. On the other hand, however, it seems that in the case of these three phrases, the fact that the dictionaries hyphenate them is sufficient fallback justification to hyphenate them (even though they classify the hyphenated form as adjectival). I prefer, however, to base my decision on what counts most: readability.

Do you hyphenate? What is your justification for doing/not doing so?

October 16, 2013

On Language: What Did He Feel When He Felt?

Filed under: On Language — Rich Adin @ 4:00 am

As an editor, I am concerned with how words are used. It is not that I do not fall into the informal usage trap myself; rather, it is that informality has its place, and whether to accept informal usage depends on context and audience.

Consider the use and misuse of felt and feel as substitutes for thought and think. When I read “he felt,” the first question that comes to mind is, “What did he feel with his hands?” Usually that question is quickly dismissed because the context clearly indicates that what was meant was “he thought.” But if you mean think/thought, why not use think/thought?

I view feel as a weak version of think, almost as an indicator that the person who is feeling isn’t really thinking, but is doing something else which is not explained. Unfortunately, feel replaces think in most authors’ writing.

Some types of writing are less formal than others and can withstand the substitution of feel for think. Feel is particularly apt in dialogue, because dialogue mimics how we speak and most speech is informal. Speech (and thus dialogue) can turn toward informality because it relies on other clues to spread the message.

In contrast, more formal writing, especially science, technical, and medical writing, relies on word choice both to convey and to firm up the message. There are no gestures accompanying the writing to reinforce intent and meaning. Consequently, in more formal writing, the difference between feel and think is important.

When I read in a medical text, for example, that the author feels something is true, I interpret that as meaning the author hopes it is true but only has vague knowledge regarding whether it is true. As always, the ultimate question boils down to one of clarity: Does the use of feel accurately convey what is meant?

The argument can be made that feel has become synonymous with think, both in meaning and strength, in today’s usage. Yet, I’m not convinced that is true. Besides, feel serves other meanings. Consider the sentence, “Jim feels blue today.” We cannot substitute thinks for feels (“Jim thinks blue today”) because doing so completely changes the meaning.

Similarly, if the sentence is “Jim thinks today is Tuesday,” substituting feels for thinks would change the sense of the sentence, if not the meaning. It also would raise a variation of the question with which we started: How does one feel that today is Tuesday?

The usual answer is to see what the dictionary says. Both The American Heritage Dictionary of the English Language (5th ed.) and Merriam-Webster’s Collegiate Dictionary (11th ed.) include think and believe as meanings for feel, although not as one of the top two definitions (they are fifth and fourth, respectively). Both dictionaries are reflecting common usage; that is, both take a descriptive rather than a prescriptive approach to word usage.

That these dictionaries include such a definition invites continued substitution of feel for think and believe. If the role of the editor is to help an author make crystal clear what the author intends to say, then the fact that think and believe are not among the first two definitions of feel should lead the editor to the more precise word, not to the assumption that common usage has devolved so far that feel, think, and believe are wholly synonymous. In fact, there are meaningful shades of difference.

Authors, too, need to think about use of feel when they mean think. Granted, in fiction the rules are looser and there is less compulsion to choose between feel and think, but even with that looseness, the choice should be thought about. Choosing the right word can be the difference between humdrum and forceful writing.

Feel invokes a sense of intuitiveness whereas think invokes a sense of decisiveness. Consequently, the choice of word imparts the author’s sense of authority. Feel sends a message of less authoritativeness, whereas think sends a message of greater authoritativeness.

In formal writing, that sense of authoritativeness is important. Readers want to believe that the author of a medical tome knows what he is writing about, or that when a historian draws a conclusion, she is basing it on the strength of facts, not on a feeling. The choice of word makes a difference in the strength of the message conveyed to the reader.

The message strength is less important in fiction than in nonfiction, but that doesn’t mean it should be discounted. As has been discussed many times on An American Editor, clarity of meaning and intent is as important in fiction as in nonfiction. Consequently, all authors and editors need to think about word choice, with the understanding that greater laxity is permitted in fiction than in nonfiction.

If there is a concern that by changing a word the editor may also be changing the author’s tone, then the correct course is to query. In this instance, the query should not only be about the word choice, but should also inquire about whether the tone presented is the tone the author wants presented considering the subject matter and audience.

Do you change or query weak word choices that could be made stronger?

September 11, 2013

Here Today, Gone Tomorrow

In the land of word resources, one stands above them all: The Oxford English Dictionary. Why? Because once in the OED, always in the OED.

Alas, the same cannot be said for the dictionaries and usage manuals most editors rely upon. Each edition of Merriam-Webster’s Collegiate Dictionary runs about the same page length, uses about the same size typeface, and is about the same thickness as previous editions. The only way this could occur is if some words were dropped as new words were added.

In olden days, I kept all my “outdated” dictionaries, largely because I liked books and couldn’t bear to part with a book. But after getting estimates to move books across country (several times), I realized that the heavyweights that I no longer ever opened needed to go. And so they did — a move that I regretted once I settled down and knew that any further moves would be local.

“Outdated” dictionaries and word usage books do have a place in the editor’s arsenal. If you are editing a novel that takes place in the 1950s, slang from the 2000s won’t be very helpful. You want to be able to check meaning and usage that is relevant to the period in which the action takes place.

Authors are products of their times. Authors write with the words with which they are familiar, the words they grew up with, that they learned in their schooldays — words that may have been removed from the dictionary to make room for more current words. And just as authors are products of their time, so are editors. We tend to use words the way we were taught to use them, and occasionally learn from an astute editor that the way we used the word is no longer acceptable. (Someone very near and dear to me drives me crazy by constantly saying “cool”. But I do recognize the lexicon era from my much younger days :).)

What brought this to mind was an article in the September issue of The Atlantic, “When Good Words Go Bad” by Jen Doll (with a different title online: “How to Edit a Dictionary”). I remember some of the now-gone words, like “ostmark” and “tattletale gray.” Another word/phrase the article mentions is “complement-fixation test,” which I still come across in material I edit.

I have also noted changes in hyphenation of compound words/phrases.

An editor has to be word knowledgeable, but what does an editor do when a word needs to be checked but it isn’t in the dictionary? Today, the easiest path is to search the Internet. I’ve done that, but never have felt comfortable relying on such a search. I’m from the days when the value of a source was measured by the source’s (national or international) reputation. I don’t know an English language editor who wouldn’t agree that the OED is a reliable source or, for American editors, that Bryan Garner’s opinion as to word usage is more valuable than general Internet search results.

Consequently, I find that I am not only saving and using older versions of what I consider to be reputable sources, but that I am buying them when I come across them in bookstores. My path backward in time is a split road — some paths go back decades, some only an edition or two.

One of the most interesting resources I have is H.L. Mencken’s The American Language (4th ed., revised). I have the original fourth edition along with its several supplements, a multivolume discourse on and exposé of the American language. You can find these books and the supplements at places like AbeBooks.com (e.g., at this link) and other antiquarian book shops. They are not popular and thus are often inexpensive. I recommend buying them if you want to learn about the American language from a person who was a recognized language authority.

Although I’ve gotten a bit sidetracked, the point I’m trying to make is that my outlook about resource books has changed. In my youth, I would never have considered having and using prior editions of dictionaries or usage books. After all, I live today and my language should be of today, or so I thought.

Now that I am an older, wiser, and more experienced editor, I recognize that in the absence of those older resources, not only is language forgotten, but writings can become less meaningful. What bohemian meant in 1930 was not the same as it meant in 1950 or in 1970, and certainly not what it means today, but what it meant in 1930 might make the difference between understanding and not understanding the allusion Sinclair Lewis was making when he used the term in 1931.

I know I have written before about the resources a professional editor has (should have) on hand (see, e.g., Working Effectively as an Editor — New Print Resources and The Business of Editing: On My Bookshelf), but what I failed to discuss — perhaps even consciously recognize — is the value of prior editions of major resources in my day-to-day work.

Another interesting aspect is to see how respected resources have changed — “grown” or “matured” — over time, which is visible by comparing editions. When I have time, I’ll pick up the three editions of Bryan Garner’s American usage books and compare an entry. Sometimes the changes are subtle, sometimes they are more obvious, but what they always are is informative.

When I am uncertain about how an author has used a word — my recollection of its meaning being different than the author’s use would indicate — I’ll open a couple of editions of a dictionary and see what changes, if any, have occurred over the years.

What I have discovered is that being able to research through prior editions of a language resource has made me a better editor. It certainly impresses authors when I can give a meaningful comment that traces language usage and explains why the current word may not be the best choice. The corollary, I have also discovered, is that impressed authors ask my clients to be sure to hire me to do the editing on their book.

Do you keep a library of older resources that you have replaced? Do you use them or are they just taking up shelf space? Or are you an editor who relies on the Internet?

September 9, 2013

The Pluraling of Plurals

Before there is a mad rush to point out that pluraling is not a word, rest assured I know it — I created the word just for this article. I do realize that pluralizing might have been a better choice, but pluraling just struck me as appropriate.

I recently edited some medical material in which the term activities of daily living was used. The authors, after giving the expanded version of the term, used the initialism ADL, which is correct. ADL appears in Merriam-Webster’s Collegiate 11e, American Heritage 5e, and Dorland’s Illustrated Medical Dictionary 32e; it does not appear in Stedman’s Medical Dictionary 28e — at least not as ADL; in Stedman’s it is ADLs. In the AMA Manual of Style 10e, it appears as ADL with a parenthetical “(but: 1 ADL, 6 ADLs).” The Scientific Style and Format (CSE Manual) 7e doesn’t specifically say anything about ADL, but it does say, “If the abbreviated term is itself a plural, do not add the ‘s’.”

Alas, the usage bible, Garner’s Modern American Usage, says nothing about ADL, but does say this about the baseball initialism RBI (runs batted in): “three RBIs” is correct and “three RBI” is incorrect based on long-time use.

In the book I was editing, the authors sometimes used ADL and sometimes ADLs, presumably to distinguish between the singular activity and the plural activities. I changed every instance of ADLs to ADL. After all, one doesn’t have activities of daily livings, activities is already plural, and, most importantly, the authors correctly defined (expanded) ADL as activities, not activity. Unsurprisingly, I suppose, the in-house editor insisted that where the authors used ADLs it was to be left as ADLs, although the in-house editor did ask for my justification for making the change.
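Changing every instance in a long manuscript is the kind of pass that can be partially automated with a wildcard search, the way macro-minded editors often work. The sketch below is purely my own illustration, not guidance from any style manual; the word list and function name are hypothetical, and the whole approach assumes you have already decided, as I did here, that the pluraled form should be normalized:

```python
import re

# Initialisms whose expanded forms are already plural. Per the argument
# in this essay, adding an "s" pluralizes a plural, so "ADLs" -> "ADL".
# Hypothetical list; extend it as your style sheet dictates.
ALREADY_PLURAL = ["ADL"]

def normalize_plural_initialisms(text: str) -> str:
    """Strip the trailing "s" from initialisms that are already plural."""
    for abbr in ALREADY_PLURAL:
        # \b word boundaries match the initialism as a whole word only,
        # so strings like "RADLs" are left untouched.
        text = re.sub(rf"\b{abbr}s\b", abbr, text)
    return text

print(normalize_plural_initialisms("Assess the ADLs before discharge."))
```

In real work I would flag rather than silently replace, since (as the in-house editor’s reaction shows) the client may want the pluraled form retained.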

The Dorland’s, Collegiate, and American Heritage dictionaries offer only ADL, not ADLs, and define ADL as activities, not activity; Stedman’s also uses activities in the definition, but does so for ADLs — Stedman’s doesn’t offer ADL as an option. The AMA Manual is the only source that offers both ADL and ADLs. The in-house editor cited AMA Manual as the authority for having both usages.

And this is the problem: When is pluraling of a plural OK?

When you read an initialism, you read the letters of the initialism individually. Some believe that reading each letter individually (i.e., A-D-L) is the proper way to deal with an initialism. For example, when we see MRI, we read it as M-R-I, not as magnetic resonance imaging, even though that is the expanded form of MRI. Others think that with initialisms, the proper thing to do is to read it as magnetic resonance imaging, not M-R-I. In other words, the conflict is between reading an abbreviation as letters versus reading it in its expanded form or as a word.

That difference is what determines whether ADLs is acceptable. We already know that ADL is a plural by its very definition. Even the AMA Manual defines it as a plural. But the AMA Manual, in this one exception, advocates pluraling a plural. Interestingly, the AMA Manual is somewhat contradictory. In the same list of terms in which it plurals the already-plural ADL, it defines the initialism ACS as acute coronary syndromes. Should it not be ACSs by the same logic as ADLs? What if there is only one syndrome? What would its initialism be?

The AMA Manual adds no special note to the initialism YLD (years living with disability) or YPLL (years of potential life lost) as it did with ADL. Presumably, in the case of YLD, that is because the subject of the phrase is not years but disability, which is singular, so the expanded form of YLDs would be years living with disabilities. Similarly, with YPLL the subject is not years but life, so, conceivably, it could be made plural as YPLLs, or years of potential lives lost.

The pluraling of YLD and YPLL makes sense because the subjects are not already plural. But in activities of daily living, the subject — activities — is already plural. The AMA Manual asks that we plural a plural.

The radical, but logical, solution is to redefine ADL as the singular activity of daily living. Just as the AMA Manual has arbitrarily determined that ADLs is correct, it could arbitrarily redefine ADL.

With all of this arguing over ADL versus ADLs and whether we are pluraling a plural, we have not gotten to any kind of rule to govern when (or if) a plural should be pluraled. I suspect that is because the rule already is that — except in the case of ADL — plurals should not be pluraled. The justification for ADL is that readers read this initialism as A-D-L when they come to it rather than expand it mentally to activities of daily living.

All right, the argument is a bit obtuse and scattered, an attempt to put a square peg in a round hole, but the one question that the in-house editor and the AMA Manual have so far failed to answer is this:

If ADL is universally defined as activities [plural] of daily living, what is the universally agreed upon definition of ADLs [pluraled plural]?

This is important because when editing, we need to define acronyms/initialisms at first use and I can’t come up with a definition that I can point to that separates ADL from ADLs — unless I use the outlier, Stedman’s. Stedman’s doesn’t recognize ADL, only ADLs, and it defines ADLs as activities [plural] of daily living, which leaves ADL to mean activity [singular] of daily living.

I view the pluraling of ADL much like pluraling of common units of measure — we don’t do it and it shouldn’t be done. The abbreviation oz is used for both ounce and ounces. The same is true of other measures, like L (liter/liters), mL (milliliter/milliliters), in (inch/inches), etc. In the case of ADL (and other similar plurals), I find little logic to pluraling a plural.

What do you think? How would you defend the in-house editor’s decision?
