An American Editor

November 4, 2010

A Musical Interlude: Odes to Editors

Editors receive a lot of abuse these days and often get blamed for errors inflicted by others in the production chain. That is not to say that we editors aren’t often the cause of errors ourselves, whether from failing to catch an error or from introducing one in our zeal to demonstrate our editorial prowess.

But, as the following video demonstrates, there are some who still prize our skills:

Yet the question is whether our profession is in decline. Gus is a new entrant to the field:

As Carmela demonstrates, we aren’t sitting still and letting the world pass us by. We constantly seek new and better ways to improve the work of our clients:

The importance of proofreading cannot be overstated. Taylor Mali illustrates the problem of editing by homonym:

Knowing how difficult things can be in the editing world, it was only a matter of time until we had our own lament:

July 7, 2010

Worth Noting: Words by Tony Judt

As I have mentioned several times over the life of this blog, I am a subscriber to The New York Review of Books. In a recent issue of the NYRB, Tony Judt, an historian, wrote a column titled “Words.” It is a column well worth reading.

Judt discusses inarticulacy and how the education of the 1950s and early 1960s taught students to speak and write with precision, to be articulate so that others could comprehend what was being communicated. He goes on to lament the “revolution” of the 1970s and subsequent years, which lessened the emphasis on articulation and heightened the emphasis on the idea as more important than its expression, and thus fostered a rise in inarticulacy. As Judt put it:

All the same, inarticulacy surely suggests a shortcoming of thought. This idea will sound odd to a generation praised for what they are trying to say rather than the thing said. Articulacy itself became an object of suspicion in the 1970s: the retreat from “form” favored uncritical approbation of mere “self-expression,” above all in the classroom.

Perhaps more alarming is Judt’s analysis of academic writing:

The “professionalization” of academic writing—and the self-conscious grasping of humanists for the security of “theory” and “methodology”—favors obscurantism.

The obscurantism of which Judt complains, I see daily in my work as an editor. How much trouble are we in when our best-educated people are unable to express themselves with clarity — or are unwilling to do so? Leadership is usually top-down, not bottom-up. More importantly, if the best educated are unable to recognize their own obscurantism, how can we expect them to correct (or even identify) obscurantism in others? Or if they can identify it, correct it?

As Judt notes, when words become Humpty Dumptyish (i.e., they have multiple meanings but mean only what I say they mean), the ideas the words express also become Humpty Dumptyish, that is, meaningless, because there is no foundation by which they can be understood globally. When the ideas become Humpty Dumptyish, they become anarchic and chaotic. Perhaps this is the problem in today’s partisan politics — political ideas have no meaning because they have so many meanings. The pomp becomes more important than the circumstance (perhaps a diplomatic-world failing) and the standard becomes that of text fragments.

I recall how unhappy I was when I discovered that my daughter’s high school English teacher (and this was in the early 1990s) had no idea that a sentence was composed of words serving as parts of speech, such as noun, verb, and adverb, each designed to contribute to a universal understanding of the message. Yet this teacher was responsible for grading my daughter’s grasp of English, as well as teaching my daughter how to grasp English. Sadly, it appears that the situation continues to deteriorate, if some of the books I edit are any indication of the articulateness of the current generation of academic authors.

I have often thought about what can be done to reverse course. I sure would hate to discover that, but for inarticulacy, war could have been avoided. I also wonder how many mishaps that we are now paying for occurred as a result of President George W. Bush’s inarticulacy. Alas, I do not see an easy road to resolution; rather, I see the problem getting worse, largely because of our difficulty in focusing.

I think the problem of inarticulacy is exacerbated by the “need” to multifunction. Few of us use a laser-like focus in our daily lives; we need to handle multiple things simultaneously and so we take a shotgun approach, hoping the “effective” zone of the spread is sufficient. We also reward the ability to multifunction, regardless of how effective the multifunctioning is. The old saying was to handle one problem at a time; today’s saying might better be handle all problems simultaneously and hope for the best.

Reversing the inarticulacy trend is probably impossible because too few people are knowledgeable about how to be articulate — and because too many people would resist the necessary steps as an infringement of their freedoms. Imagine if suddenly every parent were told that for their child to graduate from elementary school to middle school the child had to show proficiency in debating skills. (Of course, the first objection, and rightfully so, would be that the teachers can’t show that proficiency, so why should my Susan have to?) Part of the problem is the texting mindset. How do you overcome the culture of fragmentary expression that it creates?

As articulation decreases and inarticulacy increases, I wonder what will become of our society 50 years from now. Would those of us educated in the 1950s and 1960s be able to communicate effectively in that future? Will the United States become a third-rate country because of dysfunctional communication skills? Will editors have a role in such an anything-goes writing milieu?

June 8, 2010

The WYSIWYG Conundrum: The Solid Cloud

Note: I wrote the following article as a guest piece for Kris Tualla’s Author & Writing Blog, where she has been discussing ebook publishing, among other topics. Kris published The WYSIWYG Conundrum on June 3, 2010, as part 6 of her series “The Death of Traditional Publishers?” I recommend reading the other articles in the series as well as her blog for an author’s look at the publishing world.

________________

We’ve had this discussion about the value and importance of professional copyediting, but it seems to be a topic that just won’t die in the eBook Age. As I have noted before, too many authors believe that they are capable of doing everything themselves while producing a superior product. I admit that out of 1 million authors (in 2009, more than 1 million books were published) there are a handful who can do it all themselves and even do a very credible, if not superb, job — but it is a handful. As my grandfather used to say about a neighbor who thought he could do it all, “jack of all trades, master of none.”

Like writing, editing is a skill, and a developed one; that is, experience brings a higher level of editing quality, just as an author’s second novel is often better written than the first as the author’s experience grows. There is a significant level-of-quality difference between an experienced professional editor’s skill set and a nonprofessional editor’s skill set.

When we look at a sentence, we see what we expect. When we look at thick clouds, they look solid enough to walk on (do you remember being a child and talking about how someday you were going to walk among and on the clouds?), but as we know, our expectation that they can support us is a false expectation. What we see is not what we get — the WYSIWYG conundrum!

The same is true of words on paper (or computer screen). We often see what we expect, not what is really there. If we always saw only what was really there, we could turn out perfect manuscripts every time. But the truth is that if you hand a manuscript to 5 different people, each of the 5 will find something that the other 4 missed, in addition to what all 5 do find.

Think about eyewitness identification. This is a field that has been explored by scientists for decades, and the conclusion hasn’t changed: eyewitness identification is one of the least reliable forms of evidence because the eyewitness has certain expectations that unconsciously get fulfilled, even if those expectations deviate from the facts. (If you haven’t watched it recently, I highly recommend Twelve Angry Men with an all-star cast led by Henry Fonda.)

Professional editors provide a dispassionate look at an author’s work. They provide a skilled, experienced eye that is trained to find the kinds of errors that the author, who is intimately familiar with the manuscript, will miss when he or she tries to self-edit. A good author lives with his or her manuscript for months and years, lives with the characters, and lives with the plot. The author knows how the heroine spells her name and whether or not she is left-handed, the color of her eyes, and all the other important details. Consequently, it is not unusual for an author who is self-editing to miss the extra “r” in Marrta because the author expects to see Marta. Our mind skims over minor errors, converting them into what should be because we have trained ourselves to see it as it should be.

It is this role that the professional editor, the “indifferent” or “dispassionate” set of eyes, fills. The professional editor can stand back — aloof — from what the author has lived with and can note the misspelled or changed name, the heroine who was left-handed in 20 other instances but is now right-handed, and the sentence construction that the author understands but the reader doesn’t. If nothing else, this last item can be the most valuable service the professional editor provides an author — making sure that the story, the plot, and the characters can be followed by the reader.

Authors tend to forget that most readers read a novel once and then never look at it again. They also tend to think that their work deserves the same intense scrutiny that a reader would give to a nonfiction book about the theory of relativity, but novels are intended to entertain, which means nonintense reading. The reader does not want to have to spend time trying to follow the storyline and certainly does not want to study the text to make it understandable. But the author rarely is capable of standing in the reader’s shoes because of the intimate relationship the author has with characters, plot, and storyline. The author knows where it should be going and expects it to go there; the reader doesn’t know, doesn’t have the intimate knowledge needed to draw everything together in some logical fashion. The author’s job is to draw it all together for the reader, but if the author can’t stand in the reader’s shoes, the author can’t honestly judge how well he or she has accomplished that task. The professional editor can because the professional editor is disinterested; there is a difference between one’s passion and one’s job that enables one to stand back and look objectively at one’s job but with bias at one’s passion.

Professional editors bring many skills that are complementary to the author’s skills to the table. These skills cannot be brought to bear on the project by the author because the author cannot separate him- or herself from his or her writing. The author suffers from the WYSIWYG conundrum: the author sees what the author expects to see.

The authors who recognize this conundrum and who take steps to have their work professionally edited are the authors who enhance both their readers’ enjoyment and their likelihood of success in an overcrowded marketplace. Success is much more than the number of downloads of free or 99¢ ebooks, especially when there is no way to know how many of those downloads actually were read or well thought of. Instead, success is having readers clamor for your books, talk about your books, express a willingness to pay a higher price for your books — all things that a professional editorial eye can help an author achieve by preventing the kinds of mistakes that turn readers away.

May 20, 2010

Editors & “Professional” Resources: A Questionable Reliance

Editors rely on lots of “professional” resources to guide their editorial decisions when working on a manuscript. In addition to dictionaries and word books, we rely on language usage guides and style manuals, among other tools. [To learn more about the professional editor’s (and my) bookshelf, see The Professional Editor’s Bookshelf.]

But it isn’t unusual for an author (or publisher) to have a different view of what is appropriate and desirable than the “professional” resources. And many editors will fight tooth and nail to make the client conform to the rules laid down in a style manual. As between language usage guides like Garner’s Modern American Usage and style manuals like The Chicago Manual of Style, I believe that editors should adhere to the rules of the former but take the rules of the latter with a lot of salt.

The distinction between the two types of manuals is important. A language manual is a guide to the proper use of language such as word choice; for example, when comprise is appropriate and when compose is appropriate. A style manual, although it will discuss in passing similar issues, is really more focused on structural issues such as capitalization: Should it be president of the United States or President of the United States? Here’s the question: How much does it matter whether it is president or President?

When an author insists that a particular structural form be followed that I think is wrong, I will tell the author why I believe the author is wrong and I will cite, where appropriate, the professional sources. But, and I think this is something professional editors lose sight of, those professional sources — such as The Chicago Manual of Style (CMOS) and the Publication Manual of the American Psychological Association — are merely books of opinion. Granted, we give them great weight, but they are just opinion. And it has never been particularly clear to me why the consensus opinion of the “panel of experts” of CMOS is any better than my client’s opinion. After all, isn’t the key clarity and consistency, not conformity to some arbitrary consensus?

If these style manuals were the authoritative source, there would be only one of them, to which we would all adhere; the fact that there is disagreement among them indicates that we are dealing with opinions to which we give credence and differing amounts of weight. (I should mention that if an author is looking to be published by a particular publisher whose style is to follow the rules in one of the standard style manuals, then it is incumbent on the editor to advise the author of the necessity of adhering to those rules and even to insist that the author do so. But where the author is self-publishing or the author’s target press doesn’t adhere to a standard, the world is more open.)

It seems to me that if there is such a divergence of opinion as to warrant the publication of so many different style manuals, then adding another opinion to the mix and giving that opinion greater credence is acceptable. I am not convinced that my opinion, or the opinion of CMOS, is so much better than that of the author that the author’s opinion should be resisted until the author concedes defeat. In the end, I think but one criterion is the standard to be applied: Will the reader be able to follow and understand what the author is trying to convey? (However, I would also say that there is one other immutable rule: that the author be consistent.) If the answer is yes, then even if what the author wants assaults my sense of good taste or violates the traditional style manual canon, the author wins — and should win.

The battles an editor cannot concede are those over constructions that make the author’s work difficult to understand and those over incorrect word choice (e.g., using comprise when compose is the correct word).

A professional editor is hired to give advice. Whether to accept or reject that advice is up to the person doing the hiring. Although we like to think we are the gods of grammar, syntax, spelling, and style, the truth is we are simply more knowledgeable (usually) than those who hire us — we are qualified to give an opinion, perhaps even a forceful or “expert” opinion, but still just an opinion. We are advisors giving advice based on experience and knowledge, but we are not the final decision makers — and this is a lesson that many of us forget. We may be frustrated because we really do know better, but we must not forget that our “bibles” are just collections of consensus-made opinion, not rules cast in stone.

If they were rules cast in stone, there would be no changes, only additions, to the rules, and new editions of the guides would appear with much less frequency than they currently do. More importantly, there would be only one style manual to which all editors would adhere — after all, whether it is president or President isn’t truly dependent on whether the manuscript is for a medical journal, a psychology journal, a chemistry journal, a sociology journal, or a history journal.

Style manuals serve a purpose, giving us a base from which to proceed and some support for our decisions, but we should not put them on the pedestal of inerrancy, just on a higher rung of credibility.

May 19, 2010

On Words: Politics, Political, and Their Progeny

Okay, I know this is dangerous territory, but I heard a speech by Robert Reich recently in which he amused his audience by defining the origins of politics. Professor Reich noted that poli is from the Greek polis and polites, or city and citizen, respectively, and that tics are blood-sucking insects. Although I found his definition amusing, and perhaps a bit accurate in our current state of political partisanship, I began to think about politics, political, and their various progeny. So here goes a look at the words and a political rant.

One source says politic is a late Middle English word derived from the Old French politique, via Latin from the Greek politikos. A different source traces its roots to a borrowing in 1427 from the Middle French politique. In the end, the birth is the same — from the Latin politicus and the Greek politikos.

But deviant forms also appeared. Politician appears to have been coined in 1588 and meant a shrewd person (and today we might mean a shrew person). One year later the meaning had morphed to a person skilled in politics. And today, when we say someone is a political animal, we can thank Aristotle and a translation from the Greek of his words politikon zoon, whose literal meaning was “an animal intended to live in a city.” Interestingly, polecat, a possible term of endearment for a politician, doesn’t have the same roots as politics.

Politics as the science and art of government dates from the 16th century. Political science first appeared in 1779 in the writings of David Hume. Political appeared in 1551 and was the English formation, believed to have its roots, again, in the Latin politicus with the addition of the English al. Politics is one of those few words that is both singular and plural, depending on context and usage.

In American English, politician originally was a noun that referred to the white-eyed vireo (Vireo griseus). In Wilson’s American Ornithology (v. II, p. 166), published in 1804, the vireo was described as follows: “This bird builds a very neat little nest…of…bits of rotten wood,…pieces of paper, commonly newspapers,…so that some of my friends have given it the name of the Politician.” Could this have been the first linking of rotten and politician? (Okay, perhaps a bit harsh.) In 1844, Natural History repeated Wilson’s association. And it was repeated again in 1917 in Birds of America.

In 1914 the Cyclopedia of American Government defined political bargain as “an agreement, usually corrupt, between contending political factions or individuals….” Seems like nothing has changed in 100 years.

Today, politician and political are simply synonyms for stalemate, for corruption, and for abuse. Alright, that’s cynical, but I’m tired of politics and politicians as usual because that is what it generally amounts to — the grinding to a halt of the country’s business to satisfy the egos of those who wield the political power and those who can buy it — especially now that the U.S. Supreme Court has given license to unlimited corporate spending in political campaigns. I can see it now: Goldman Sachs will spend $500 million — probably 1 day’s profits — to buy the next Congress, against which my paltry $500 contribution will be like a single grain of sand thrown at the Rock of Gibraltar as my attempt to influence the crumbling of the Rock.

America is quickly becoming the land of the extremes, a place where centrists, which is what most of us are, wield little to no influence, and a land where doublespeak is the language of the day. (I’m still waiting for my Tea Party neighbors who rail against socialized medicine to give up their Medicare [can I suggest a Burn Your Medicare Card rally?]. The day I see that happen will be the first day I really believe that the Tea Party is a semi-honest political movement. Until then it looks like a “me first and only” movement.)

Group greed is what seems to move America today. In my local school budget vote, my city’s school budget was soundly defeated by a 3:1 margin. I admit that for the first time in my life I voted against a school budget — and that’s a lot of votes cast over many years. The final straw was when the teachers refused to make any sacrifice whatsoever, claiming that they needed their raises and continued free benefits because their living costs have been rising. Are they so naive as to think no one else’s living costs have also been rising, that they are unique?

I know that in reality nothing has changed. Today’s group greed is the same as yesterday’s; only the groups have changed. But somewhere someone besides me must recognize the lack of equilibrium between lowering taxes and maintaining or increasing government services. Something has to give. It’s like the demand for electricity to power our summer air conditioners — we want more power without brownouts but we don’t want to build the infrastructure to provide it; we want less reliance on foreign oil but we want ever larger and more powerful automobiles; we want our children to breathe clean air but we oppose cap-and-trade legislation.

Makes me wonder who the children really are!

May 6, 2010

On Books: An Analytic Dictionary of English Etymology

I am very interested in the etymology of words. Consequently, I tend to look for and buy books about language and words. Perhaps the best dictionary-type source of English etymology is Anatoly Liberman’s An Analytic Dictionary of English Etymology: An Introduction (2008, University of Minnesota Press).

A professor of Germanic philology at the University of Minnesota, Liberman has authored numerous books and articles on the subject of etymology. An Analytic Dictionary is probably his most important work. Although the book is relatively sparse in terms of words discussed (only 55 are addressed), in it Liberman introduces a new methodology for reporting etymology. Whether this methodology will be broadly adopted remains to be seen, but it certainly has my vote.

Most etymology dictionaries provide word origins as if the origins are undisputed. In some cases, they do not tackle a word’s origins, noting instead that the origins are “unknown”; in other cases, they present the origins but do not note that the origins are disputed. Rarely do they provide a complete etymology.

Consider the word boy. The Oxford Dictionary of Word Histories begins its discussion (which is a single, short paragraph) by assigning the origin to Middle English and saying “the origin is obscure.” Chambers Dictionary of Etymology gives slightly more detail but also finds the word to be of “uncertain origin.” Liberman, in contrast, provides 8 pages of etymological information, discussing all of the existing derivations and all of the research and speculation, and then choosing what he believes to be the likeliest. The reader thus has enough information to draw his or her own conclusion as to the likely origins and a solid basis for further research.

For those with an interest in English etymology, Liberman’s effort is an important contribution to the subject. Unlike other dictionaries that simply synopsize a word’s history without giving the reader any source information, Liberman takes great pains to discuss earlier etymological works, exposing the reader to significantly more than just the conclusion. My hope is that Liberman will follow up with additional volumes of the dictionary and that other etymologists will adopt his approach to word history.

April 16, 2010

On Words: Bogeyman

I don’t recall what I was reading, I think it was a newspaper article, but suddenly I was faced with a word I hadn’t seen or heard in quite a number of years: bogeyman.

I’ve never given the word much thought; I’ve always subconsciously thought I understood what it meant — after all, how many times does a child need to be told to beware the bogeyman before the child understands that bogeyman (also spelled boogeyman) doesn’t mean a treat? But the recent reading of the word (and if I had to venture a guess, I’d say I read it in a political commentary) made me wonder about the word. So here goes a short exploration of bogeyman.

Starting with the dictionary definition, a bogeyman (n.) is a “terrifying specter; a hobgoblin” (American Heritage Dictionary 4e). Merriam-Webster’s Collegiate Dictionary 11e gives a more child-oriented definition: “1: a monstrous imaginary figure used in threatening children 2: a terrifying or dreaded person or thing.”

Bogey was the name for a mischievous spirit and was first recorded as a proper name for the devil, who is sometimes referred to as Colonel Bogey. The origin appears to be unknown, but the word seems to be related to the 16th century’s bogle, which meant ghost or phantom.

Bogey appeared in 1836 in Bogey the Devil and in a popular song of the 1890s entitled “The Bogey Man,” from which golf took its bogey. In 1890, Dr. Thomas Browne was playing against Major Wellman, the match being against the “ground score,” which was the name given to the scratch value of each hole. The system of playing against the “ground score” was new to Major Wellman, and he exclaimed, thinking of the popular song of the moment, “The Bogey Man,” that his mysterious and well-nigh invincible opponent was a regular “bogey-man.”

Bogeyman is likely related to the Scottish bogill, although there is also some claim that it is derived from the English word boggart, a shapeshifting creature, often black and hairy, that hides under your bed or in your closet until after sundown.

Regardless of its origins, bogeyman has become a part of the everyday lexicon, likely as a result of its use to threaten children. If you have any more information to add to the etymology of bogeyman, please do so. Sources seem to be scarce.

April 12, 2010

Editors in the Offshore World

Editing has come a long way, baby! All the way from being a local skill set to an anywhere-anytime-anyone skill set. How many times has a neighbor said to you: “I just finished reading XYZ and spotted 3 errors. I could have been an editor!”? That the neighbor missed 200 other glaring errors is beside the point — that everyone thinks he or she can be an editor is the point. And that publishers think that anyone anywhere can edit a book for a local audience is also the point.

No matter where you go in this world you will find the same two editorial classes: those who can edit and those who can’t. Pick a country — doesn’t matter which one — and you’ll find the two classes. So the problem isn’t that the local country has a monopoly on good editors; the problem is more intricate than that.

Editors are a reading class, or a class of readers. Editors tend to be book lovers — why else would one want to wade through some of the drivel editors see? As book lovers, editors (as a class) tend to spend a larger proportion of their earnings on reading material than noneditors (as a class). But as editors’ incomes decline, so does the amount of money that they can spend on reading material. And let’s face it — no matter how you slice and dice it, an editor in Ukraine is unlikely to be a large consumer of books for the American audience, just as the American editor is unlikely to be a big consumer of books for the Ukrainian market.

How, you ask, does this relate to offshoring? Think about the books we buy in the United States. When a book refers to “Appalachian-level poverty,” the U.S. reader understands the metaphor. Would the Indian or Australian reader understand its symbolism? (Yes, we can all point to the one or two U.S. folk who wouldn’t understand and to the one or two Indians or Australians who would — let’s not get quite so picky.) Would the Indian or Australian editor understand the connotations of equating living in, say, Los Angeles with living in the Mississippi Delta?

This isn’t a one-way street. There is much I wouldn’t know or understand about a metaphor relating to a French or German localism that the French or German reader and editor would understand immediately. And offshoring affects the German editor, the Australian editor, and the British editor as much as it affects the American editor. The Internet has made us all vulnerable to offshoring. Also remember this: Today’s offshore choice is likely to find itself scrambling within a decade as some other place becomes the new cheap haven.

When a publisher offshores editorial work, the publisher not only raises editorial problems because of localisms but also deprives a portion of its audience of the means to buy the book. It is a circle of quality plus buying power. It is true that publishers offshore to save immediate costs, but at the penalty of lost future sales.

Of course, those of us who earn our livelihoods as editors know that when the publisher offshores, the publisher really is not offshoring a single part of the editorial and production cycle. What the publisher does is offshore the whole editorial-production package to a “packager.” The packager then re-offshores the editorial work by trying to hire local editors, that is, editors local to the country where the book will be published.

The problem isn’t the offshoring to the packager who then re-offshores back to the local country; the problem is that the packager has cut a deal with the devil in order to get the original work. The publisher offshores to cut editorial-production costs; the packager promises reduced editorial-production costs; the packager cannot provide the editorial part in its own locale so it re-offshores the editorial work. But because of the promised savings, the packager has to short-change something and so it short-changes editorial.

Editors in the United States saw this in practice not too many months ago when a mass e-mail was sent by an offshore packager looking for experienced STM (science, technical, and medical) editors to edit book and journal manuscripts, saying: “We’re dealing with International clients only so they need very high standard of Quality and on time delivery so there will not be any compromise on these front. [sic]” In addition to the “high standard of Quality” editing, the packager required “a Non-competent [sic] agreement between us.”

A high-quality edit of STM material is time-consuming, and experienced STM editors know that such a requirement means a churn rate of about 5 manuscript pages an hour. Add in the requirement of a noncompete agreement and an editor would think the fee would be a reasonable one. Alas, this packager offered “Copyediting – $0.80 per page,” which meant that the editor would earn $4.00 an hour, slightly more than half of U.S. minimum wage.
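To spell out the arithmetic (using the 5-pages-an-hour churn rate above and the $7.25 federal minimum wage then in effect): $0.80 per page × 5 pages an hour = $4.00 an hour, or roughly 55% of minimum wage.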

This packager’s pricing was a bit extreme but not by much. This was my — and every editor who received the e-mail’s — Appalachian-level poverty moment.

I don’t know how many U.S. editors this packager ultimately was able to hire at this rate, but if I were a publisher who cut the deal with this packager, I sure would wonder about the work quality and the skill set of the hired editors. Of course, it also makes me wonder what the rate would be for nonspecialty editing or for inexperienced editors.

Offshoring of editorial functions to packagers because of a combined low package price is problematic on multiple levels. If publishers indirectly cause the demise of the local editorial class, they are also causing the demise of a significant segment of their buyers. If the editorial quality remains consistently poor, something we are seeing in ebooks, and which is creeping up in pbooks, people will ultimately become so frustrated with the reading experience that they will rebel at paying for it. If I have to suffer through a poorly edited book, I may as well suffer through one that cost me nothing — something that is increasingly seen with ebooks. Publishers and authors can protest low pricing claiming their products are valuable, but the marketplace, as offshoring of editorial work continues, may well view the products as worth nearly nothing.

I am reminded of what happened after World War II with the rise of the Japanese economic empire. Made in Japan became synonymous with very cheap and mediocre to poor quality; Made in America was equated with expensive and high quality. But books in American English don’t carry a label that says “edited in Somalia,” so readers assume — often incorrectly — that the editing — good, bad, or indifferent — was done by local editors (assuming it was edited at all). Perhaps books should be required to carry origin labels just like other products.

Books aren’t like TVs. Book editing requires a more intangible skill set than does assembling a TV; it requires knowledge and decision-making prowess. It is not repeatedly putting the same part in the same place. Although some aspects of editing can be automated, there still needs to be a decision maker who can decide between “know” and “no.”

Offshoring affects editors everywhere because economies rise and fall and today’s cheap labor becomes tomorrow’s expensive labor; today’s editors who receive the offshored jobs will tomorrow find their own jobs offshored. It also affects publishers everywhere: what is jeopardized is their ability to produce localized books that sell to the local market at a price that market will bear, with the quality that market expects.

There is no easy answer, but with book prices climbing, editorial quality needs to keep pace. Perhaps the answer is to standardize the world’s languages into a single language with no localisms permitted.

April 9, 2010

On Words: Jim Crow

Last week I came across Jim Crow in two different magazines: first in the current issue of American Heritage and then in the current week’s The Economist. Jim Crow is not an unknown or rarely used term. It is commonly found in American history books dealing with slavery and segregation and in magazine articles discussing segregation, the civil rights movement, and the history of racism. I understand what it means (systematic discrimination against and segregation of blacks, especially as practiced in the southern United States after the Civil War and until the mid to late 20th century) and that it is an epithet reserved for the racial group being discriminated against. But I never knew its origins.

Jim Crow was the stage name of a black minstrel character in a popular song and dance act performed by Thomas Rice about 1835. Rice was known as the “father of American minstrelsy.” Following Rice, other performers performed the Jim Crow character.

The song on which Rice’s act was based first appeared in an 1828 play called Jim Crow. The play’s song had the refrain “My name’s Jim Crow, Weel about, and turn about, And do jis so.” Rice’s version used the refrain “Wheel about and turn about and jump Jim Crow.” The song was so popular that newspapers and reviews in the 19th century often referred to it; for example, the Boston Transcript (March 25, 1840) wrote: “Tell ’em to play Jim Crow!” In 1926, the New York Times (December 26) wrote: “From ‘Old Jim Crow’ to ‘Black Bottom,’ the negro dances come from the Cotton Belt, the levee, the Mississippi River, and are African in inspiration.” The 1849 Howe Glee Book stated: “Toe and heel and away we go. Ah, what a delight it is to know De fancy Jim Crow Polka.”

Perhaps the musical origins were not innocent, but they did not carry the malice of subsequent uses, particularly as Jim Crow was used following Reconstruction after the Civil War.

The first recorded use of the word crow in its derogatory sense was by James Fenimore Cooper in his 1823 book The Pioneers, in which he used crow as a derogatory term for a black man.

One of the earliest uses of Jim Crow as a derogatory term not associated with the song or the minstrel act was in 1838, when “Uncle Sam” in Bentley’s Miscellany wrote: “Don’t be standing there like the wooden Jim Crow at the blacking maker’s store.” And one of the earliest direct, unmistakable uses of Jim Crow as a racist term was in the Playfair Papers (1841): “A portmanteau and carpet bag…were snatched up by one of the hundreds of nigger-porters, or Jim Crows, who swarm at the many landing-places to help passengers.” In 1842, Jim Crow car meant a railroad car designated for blacks. Harriet Beecher Stowe, in Uncle Tom’s Cabin (1852), wrote: “I thought she was rather a funny specimen in the Jim Crow line.”

But Jim Crow as a political term came into its own following Reconstruction. The Nation of March 17, 1904, reported that “Writing of the ‘Jim Crow’ bills now before the Maryland Legislature, the Cardinal expressed his strong opposition.” Two months later, the Richmond Times-Dispatch (May 25, 1904) reported: “The Norfolk and Southern Railroad was fined $300 to-day for violating the ‘Jim Crow’ law by allowing negroes to ride in the same car with whites.” The previous year, the New York Sun (November 29, 1903) reported that “The members of the committee have arranged with the parents of negro children to send them all to the Jim Crow school, thus entirely separating the white and negro pupils.”

The New World (1943) discussed Jim Crowism: “Negro soldiers had suffered all forms of Jim Crow, humiliation, discrimination, slander, and even violence at the hands of the white civilian population.” Time reported in 1948 (December 13) that “The Federal Council…went on record as opposing Jim Crow in any form.” And in what became a prescient statement, the Daily Ardmoreite of Ardmore, Oklahoma, wrote on January 22, 1948: “What they call a ‘Jim Crow’ school cannot meet the federal court’s requirements for equality under the 14th amendment.” This was subsequently confirmed by the U.S. Supreme Court in Brown v. Board of Education (1954).

Many more examples are available of Jim Crow and its morphing from a popular song to a derogatory term. No history of the word can take away the harm and the hurt Jim Crowism inflicted on innocent people. Even today Jim Crow remains a blight on the reputation of the South. It wasn’t until the mid-1950s that Jim Crow began its death spiral. As each year passes, Jim Crow increasingly becomes a relic of history — where Jim Crowism belongs.

April 2, 2010

On Words: Clinch and Clench

In a recent New York Times article, U.S. Senator Robert Bennett (Republican of Utah) was quoted as saying “…it was through clinched teeth that they welcomed me.…” Immediately, I thought “you mean ‘clenched teeth.’” Although I was certain clench was correct, I decided I had better check.

In olden days, way back in the 16th century and perhaps even earlier, clinch and clench were identical in usage terms — they meant and referred to the same thing. Clench, a verb, can trace its roots to about 1250 and to clenchen from The Owl and the Nightingale. Clenchen developed from the Old English beclencan, meaning to hold fast, and has Germanic roots (i.e., klenkan in Old High German and klenken in Middle High German, both of which meant to tie, knot, or entwine).

Clinch came into being about 1570 as a variant of clench, as a verb meaning fasten firmly. Approximately 60 years later the figurative sense of clinch, meaning to settle decisively, came into use. Clincher took a sidetrack; originally it was a noun (1330) describing a worker who put in clinching nails. The first recorded use of clincher as meaning a conclusive argument or statement was in 1737.

Clinch became Americanized in the 19th century to mean the sense of a struggle at close quarters (1849) and morphed to mean a tight fighting grasp (1875). As its history shows, the general sense occurs early in English, but the modern technical use is American.

Along the way, clinch and clench became differentiated. In American usage, clinch became figurative and clench became physical. As Bryan Garner (Modern American Usage) puts it: “Hence you clinch an argument or debate but you clench your jaw or fist.” I have been unable to identify either the point at which usage shifted or any sources that document the shift. The basis for Garner’s statement isn’t clear to me, except that it comports with my understanding of the terms.

Even so, it isn’t clear from the dictionaries or from usage that Senator Bennett was wrong to use clinch rather than clench. I concede that clench sounds better, sounds more correct, to my ear, and if I were his speechwriter, clench is the word I would have chosen.

If you have any additional information on the separation of clinch and clench, particularly in the American lexicon, I would appreciate your sharing it with me.
