An American Editor

September 16, 2011

A Humor Interlude: Bad Grammar — The Way I Are

Filed under: A Humor Interlude — Rich Adin @ 4:00 am

It has been a while since our last humor interlude, so I thought we deserved a break. And here it is — Bad Grammar: The Way I Are.

I’m just not convinced that humorous is the best way to describe the parodied behavior. Bad grammar is rapidly becoming the standard.

July 19, 2011

In Search of the Semicolon

The trend in punctuation seems to be less is more; that is, it is better to have less punctuation than more. The trend began with the comma but seems to be spreading to other non-sentence-ending punctuation; to wit, the semicolon.

The semicolon is a time-honored punctuation mark used to separate two or more independent clauses that are joined without a coordinating conjunction or by a conjunctive adverb such as however, therefore, thus, and furthermore. The semicolon is also used to separate elements in a series that is long and complex or that has internal punctuation.

The purpose of using the semicolon is to bring clarity to what might otherwise be a confused or misleading sentence.

I recently edited a book in which I made consistent use of the semicolon — only to receive instruction from the client to replace the semicolons with commas. When I asked why, the response was that neither the in-house editor nor the author approves of semicolons, and thus they wanted the use of semicolons minimized.

What does a professional editor do? The reality is that the professional editor has little choice. He who pays the piper can call the tune! Unfortunately, this attitude toward the semicolon is symptomatic of a very minimalist trend in editing: The author’s choices are sacrosanct unless … (with unless never really being defined so that it can be consistently applied).

With the passing of each day, we move further away from good grammar being a goal to strive for and closer to the Twitter standard of language — short and ungrammatical, isolated statements that convey an imprecise meaning.

Minimizing punctuation is not inherently a nefarious goal. After all, the purposes of punctuation are to interrupt an illogical flow and to make clear what would otherwise be unclear. Another purpose is to define the parameters of a written idea. Consequently, the less disruption via punctuation that is necessary, the clearer the statement being made and the better the communication from author to reader.

Yet to be ruled by a broad mandate to “minimize the amount of punctuation” is to ignore the fundamental purpose of punctuation and grammar: to make clear what would otherwise be unclear. Stated another way: to enhance communication between writer and reader. What good does it do to spend hours creating a message that no one can understand?

I recently read a newspaper article whose headline was “For a full ride to graduate school, a tweet is the ticket.” (The headline differs depending on the source, but the article remains the same.) The University of Iowa was offering a full scholarship, worth about $37,000, to the best tweeter of a 140-character tweet in lieu of a second application essay. I understand that it takes time to read, analyze, and evaluate an essay, but a tweet in lieu of such an essay?

The University of Iowa is not the only institution to offer a tweet scholarship, and this worries me. As an editor I recognize that tweets are intended to be informal quips. I also understand that it takes great skill to condense a 1,000-word article (essay) to its 140-character essence. But to make that condensation, something has to give, and what gives is spelling and grammar. I’m not so sure that I want to be medically treated by a doctor whose claim to fame is that he or she is a Twit who successfully condensed his or her life story down to 140 characters. Nor do I feel comfortable in following the business advice of a 140-character Twit. After all, it will be my money on the table, not the Twit’s money.

More important, however, is the message that is being sent about communication skills combined with grammar and spelling skills. Before Twitter, most of us considered grammar, punctuation, and spelling to be essential parts of good communication. Lack of skills in one meant a deficit in the others and incomplete communication at best, miscommunication at worst. That is being turned topsy-turvy as Twittering becomes the established route to success. With Twitter, every character counts, so it is better to write 8 than ate.

This also affects the professional editor because Twitter has no grammar or spelling standards. If the Twitter language becomes the norm and is accepted, what we end up with is a free-for-all with no rules — no punctuation, no grammatical construction, no concern for spelling — because every character counts. If authors and in-house editors begin to accept this lack of rules as the standard, we will see a decrease in the need for editors and an increase in poorly written material (poorly, that is, in the sense of poorly communicating the author’s message to its audience).

I see the death spiral of the semicolon and comma as the harbinger of chaos to come. It is not that we should flood our work with punctuation but that we should be guided by what is best and necessary to communicate clearly and accurately, not by a desire to participate in the newest minimalist trend.

What do you think?

July 7, 2010

Worth Noting: Words by Tony Judt

As I have mentioned several times over the life of this blog, I am a subscriber to The New York Review of Books. In a recent issue of the NYRB, Tony Judt, an historian, wrote a column titled “Words.” This is a column well worth reading.

Judt discusses inarticulacy and how the education of the 1950s and early 1960s taught students to speak and write with precision, to be articulate so that others could comprehend what was being communicated. He goes on to lament the “revolution” of the 1970s and subsequent years, which lessened the emphasis on articulation and heightened the emphasis on the idea being more important than its expression, and thus led to a rise in inarticulacy. As Judt put it:

All the same, inarticulacy surely suggests a shortcoming of thought. This idea will sound odd to a generation praised for what they are trying to say rather than the thing said. Articulacy itself became an object of suspicion in the 1970s: the retreat from “form” favored uncritical approbation of mere “self-expression,” above all in the classroom.

Perhaps more alarming is Judt’s analysis of academic writing:

The “professionalization” of academic writing—and the self-conscious grasping of humanists for the security of “theory” and “methodology”—favors obscurantism.

The obscurantism of which Judt complains I see daily in my work as an editor. How much trouble are we in when our best-educated people are unable to express themselves with clarity — or are unwilling to do so? Leadership is usually top-down, not bottom-up. More importantly, if the best educated are unable to recognize their own obscurantism, how can we expect them to correct (or even identify) obscurantism in others? Or, if they can identify it, to correct it?

As Judt notes, when words become Humpty Dumptyish (i.e., they have multiple meanings but mean only what I say they mean), the ideas the words express also become Humpty Dumptyish, that is, meaningless, because there is no foundation by which they can be understood globally. When the ideas become Humpty Dumptyish, they become anarchic and chaotic. Perhaps this is the problem in today’s partisan politics — political ideas have no meaning because they have so many meanings. The pomp becomes more important than the circumstance (perhaps a diplomatic-world failing) and the standard becomes that of text fragments.

I recall how unhappy I was when I discovered that my daughter’s high school English teacher (and this was in the early 1990s) had no idea that a sentence was composed of words serving as parts of speech, such as noun, verb, and adverb, each designed to contribute to a universal understanding of the message. Yet this teacher was responsible for grading my daughter’s grasp of English, as well as for teaching my daughter how to grasp English. Sadly, it appears that the situation continues to deteriorate, if some of the books I edit are any indication of the articulateness of the current generation of academic authors.

I have often thought about what can be done to reverse course. I sure would hate to discover that, but for inarticulacy, war could have been avoided. I also wonder how many mishaps that we are now paying for occurred as a result of President George W. Bush’s inarticulacy. Alas, I do not see an easy road to resolution; rather, I see the problem getting worse. I see it getting worse because of the difficulty in focusing.

I think the problem of inarticulacy is exacerbated by the “need” to multifunction. Few of us use a laser-like focus in our daily lives; we need to handle multiple things simultaneously and so we take a shotgun approach, hoping the “effective” zone of the spread is sufficient. We also reward the ability to multifunction, regardless of how effective the multifunctioning is. The old saying was to handle one problem at a time; today’s saying might better be handle all problems simultaneously and hope for the best.

Reversing the inarticulacy trend is probably impossible because too few people are knowledgeable about how to be articulate — and because too many people would resist the necessary steps as an infringement of their freedoms. Imagine if suddenly every parent were told that for their child to graduate from elementary school to middle school, the child had to show proficiency in debating skills. (Of course, the first objection, and rightfully so, would be that the teachers can’t show that proficiency, so why should my Susan show it?) Part of the problem is the texting mindset. How do you overcome the culture of fragmentary expression that it creates?

As articulation decreases and inarticulacy increases, I wonder what will become of our society 50 years from now. Would those of us educated in the 1950s and 1960s be able to communicate effectively in that future? Will the United States become a third-rate country because of dysfunctional communication skills? Will editors have a role in such an anything-goes writing milieu?

May 20, 2010

Editors & “Professional” Resources: A Questionable Reliance

Editors rely on lots of “professional” resources to guide their editorial decisions when working on a manuscript. In addition to dictionaries and word books, we rely on language usage guides and style manuals, among other tools. [To learn more about the professional editor’s (and my) bookshelf, see The Professional Editor’s Bookshelf.]

But it isn’t unusual for an author (or publisher) to have a different view of what is appropriate and desirable than the “professional” resources. And many editors will fight tooth and nail to make the client conform to the rules laid down in a style manual. As between language usage guides like Garner’s Modern American Usage and style manuals like The Chicago Manual of Style, I believe that editors should adhere to the rules of the former but take the rules of the latter with a lot of salt.

The distinction between the two types of manuals is important. A language manual is a guide to the proper use of language such as word choice; for example, when comprise is appropriate and when compose is appropriate. A style manual, although it will discuss in passing similar issues, is really more focused on structural issues such as capitalization: Should it be president of the United States or President of the United States? Here’s the question: How much does it matter whether it is president or President?

When an author insists that a particular structural form be followed that I think is wrong, I will tell the author why I believe the author is wrong and will cite, where appropriate, the professional sources. But, and I think this is something professional editors lose sight of, those professional sources — such as The Chicago Manual of Style (CMOS) and the Publication Manual of the American Psychological Association — are merely books of opinion. Granted, we give them great weight, but they are just opinion. And it has never been particularly clear to me why the consensus opinion of the “panel of experts” of CMOS is any better than my client’s opinion. After all, isn’t the key clarity and consistency, not conformity to some arbitrary consensus?

If these style manuals were the authoritative source, there would be only one of them, to which we would all adhere; the fact that there is disagreement among them indicates that we are dealing with opinions to which we give credence and differing amounts of weight. (I should mention that if an author is looking to be published by a particular publisher whose style is to follow the rules in one of the standard style manuals, then it is incumbent on the editor to advise the author of the necessity of adhering to those rules, and even to insist that the author do so. But where the author is self-publishing, or the author’s target press doesn’t adhere to a standard, the world is more open.)

It seems to me that if there is such a divergence of opinion as to warrant the publication of so many different style manuals, then adding another opinion to the mix and giving that opinion greater credence is acceptable. I am not convinced that my opinion, or the opinion of CMOS, is so much better than that of the author that the author’s opinion should be resisted until the author concedes defeat. In the end, I think but one criterion is the standard to be applied: Will the reader be able to follow and understand what the author is trying to convey? (However, I would also say that there is one other immutable rule: that the author be consistent.) If the answer is yes, then even if what the author wants assaults my sense of good taste or violates the traditional style manual canon, the author wins — and should win.

The battles an editor cannot concede are those over choices that make the author’s work difficult to understand and those over incorrect word choice (e.g., using comprise when compose is the correct word).

A professional editor is hired to give advice. Whether to accept or reject that advice is up to the person doing the hiring. Although we like to think we are the gods of grammar, syntax, spelling, and style, the truth is we are simply more knowledgeable (usually) than those who hire us — we are qualified to give an opinion, perhaps even a forceful or “expert” opinion, but still just an opinion. We are advisors giving advice based on experience and knowledge, but we are not the final decision makers — and this is a lesson that many of us forget. We may be frustrated because we really do know better, but we must not forget that our “bibles” are just collections of consensus-made opinion, not rules cast in stone.

If they were rules cast in stone, there would be no changes, only additions, to the rules, and new editions of the guides would appear with much less frequency than they currently do. More importantly, there would be only one style manual to which all editors would adhere — after all, whether it is president or President isn’t truly dependent on whether the manuscript is for a medical journal, a psychology journal, a chemistry journal, a sociology journal, or a history journal.

Style manuals serve a purpose, giving us a base from which to proceed and some support for our decisions, but we should not put them on the pedestal of inerrancy, just on a higher rung of credibility.

April 9, 2010

On Words: Jim Crow

Last week I came across Jim Crow in two different magazines: first in the current issue of American Heritage and then in the current week’s The Economist. Jim Crow is not an unknown or rarely used term. It is commonly found in American history books dealing with slavery and segregation, and in magazine articles discussing segregation, the civil rights movement, and the history of racism. I understand what it means (systematic discrimination against and segregation of blacks, especially as practiced in the southern United States after the Civil War and until the mid to late 20th century) and that it is an epithet reserved for the racial group being discriminated against. But I never knew its origins.

Jim Crow was the stage name of a black minstrel character in a popular song and dance act performed by Thomas Rice about 1835. Rice was known as the “father of American minstrelsy.” Following Rice, other performers performed the Jim Crow character.

The song on which Rice’s act was based first appeared in an 1828 play called Jim Crow. The play’s song had the refrain “My name’s Jim Crow, Weel about, and turn about, And do jis so.” Rice’s version used the refrain “Wheel about and turn about and jump Jim Crow.” The song was so popular that newspapers and reviews in the 19th century often referred to it; for example, the Boston Transcript (March 25, 1840) wrote: “Tell ’em to play Jim Crow!” In 1926, the New York Times (December 26) wrote: “From ‘Old Jim Crow’ to ‘Black Bottom,’ the negro dances come from the Cotton Belt, the levee, the Mississippi River, and are African in inspiration.” The 1849 Howe Glee Book stated: “Toe and heel and away we go. Ah, what a delight it is to know De fancy Jim Crow Polka.”

Perhaps the musical origins were not innocent, but they did not carry the malice of subsequent uses, particularly as Jim Crow was used following Reconstruction after the Civil War.

The first recorded use of the word crow in its derogatory sense was by James Fenimore Cooper in his 1823 book The Pioneers, in which he used crow as a derogatory term for a black man.

One of the earliest uses of Jim Crow as a derogatory term not associated with the song or the minstrel act was in 1838, when “Uncle Sam” in Bentley’s Miscellany wrote: “Don’t be standing there like the wooden Jim Crow at the blacking maker’s store.” And one of the earliest direct, no-mistake-about-it uses of Jim Crow as a racist term was in the Playfair Papers (1841): “A portmanteau and carpet bag…were snatched up by one of the hundreds of nigger-porters, or Jim Crows, who swarm at the many landing-places to help passengers.” In 1842, Jim Crow car meant a railroad car designated for blacks. Harriet Beecher Stowe, in Uncle Tom’s Cabin (1852), wrote: “I thought she was rather a funny specimen in the Jim Crow line.”

But Jim Crow as a political term came into its own following Reconstruction. The Nation of March 17, 1904, reported that “Writing of the ‘Jim Crow’ bills now before the Maryland Legislature, the Cardinal expressed his strong opposition.” Two months later, the Richmond Times-Dispatch (May 25, 1904) reported: “The Norfolk and Southern Railroad was fined $300 to-day for violating the ‘Jim Crow’ law by allowing negroes to ride in the same car with whites.” The previous year, the New York Sun (November 29, 1903) reported that “The members of the committee have arranged with the parents of negro children to send them all to the Jim Crow school, thus entirely separating the white and negro pupils.”

The New World (1943) discussed Jim Crowism: “Negro soldiers had suffered all forms of Jim Crow, humiliation, discrimination, slander, and even violence at the hands of the white civilian population.” Time reported in 1948 (December 13) that “The Federal Council…went on record as opposing Jim Crow in any form.” And in what became a prescient statement, the Daily Ardmoreite of Ardmore, Oklahoma, wrote on January 22, 1948: “What they call a ‘Jim Crow’ school cannot meet the federal court’s requirements for equality under the 14th amendment.” This was subsequently confirmed by the U.S. Supreme Court in Brown v. Board of Education (1954).

Many more examples are available of Jim Crow and its morphing from a popular song to a derogatory term. No history of the word can take away the harm and the hurt Jim Crowism inflicted on innocent people. Even today Jim Crow remains a blight on the reputation of the South. It wasn’t until the mid-1950s that Jim Crow began its death spiral. As each year passes, Jim Crow increasingly becomes a relic of history — where Jim Crowism belongs.

April 2, 2010

On Words: Clinch and Clench

In a recent New York Times article, U.S. Senator Robert Bennett (Republican of Utah) was quoted as saying “…it was through clinched teeth that they welcomed me.…” Immediately, I thought, “you mean ‘clenched teeth.’” Although I was certain clench was correct, I decided I had better check.

In olden days, way back in the 16th century and perhaps even earlier, clinch and clench were identical in usage terms — they meant and referred to the same thing. Clench, a verb, can trace its roots to about 1250 and to clenchen from The Owl and the Nightingale. Clenchen developed from the Old English beclencan, meaning to hold fast, and has Germanic roots (i.e., klenkan in Old High German and klenken in Middle High German, both of which meant to tie, knot, or entwine).

Clinch came into being about 1570 as a variant of clench, as a verb meaning to fasten firmly. Approximately 60 years later, the figurative sense of clinch, meaning to settle decisively, came into use. Clincher took a sidetrack; originally it was a noun (1330) describing a worker who put in clinching nails. The first recorded use of clincher as meaning a conclusive argument or statement was in 1737.

Clinch became Americanized in the 19th century to mean the sense of a struggle at close quarters (1849) and morphed to mean a tight fighting grasp (1875). As its history shows, the general sense occurs early in English, but the modern technical use is American.

Along the way, clinch and clench became differentiated. In American usage, clinch became figurative and clench became physical. As Bryan Garner (Modern American Usage) puts it: “Hence you clinch an argument or debate but you clench your jaw or fist.” I have been unable to identify either the point at which usage shifted or any sources that can identify the shift. The basis for Garner’s statement isn’t clear to me, except that it comports with my understanding of the terms.

Even so, it isn’t clear from the dictionaries or from usage that Senator Bennett was wrong in his use of clinch rather than clench. I concede that clench sounds better, sounds more correct, to my ear, and if I were his speechwriter, clench is the word I would have chosen.

If you have any additional information on the separation of clinch and clench, particularly in the American lexicon, I would appreciate your sharing it with me.

March 12, 2010

On Words: Is the Correct Word Important?

One facet of a professional editor’s work is to help an author choose the correct word to convey the author’s meaning. I do not mean choosing between homonyms (e.g., seams vs. seems), but rather helping the author communicate with increased precision.

This is less problematic in fiction than in nonfiction, although it does have ramifications in fiction, too. I doubt that it matters much whether a character in a novel believes, thinks, or feels something; that is, use of any of the words is sufficient to convey the meaning intended. But in nonfiction, shades do matter and precision is more important.

Consider feel. Authors often use feel as if it were synonymous with think or believe. It is not unusual to see a construction such as: “The authors feel that a difference of 0.2 standard deviations is insignificant.” But the authors do not really feel this, they believe or think it. Yet many people accept feel as proper usage in this construction. Does it matter? Yes!

Feel is a less intense expression of think and believe, a weak substitute for the correct expression. Consequently, using feel as a substitute for think or believe is to weaken the argument. Feel’s semantic lineages are touch and sensation; its Old English root is felan. In contrast, believe’s root is the late Old English belyfan and think’s root is the Old English thencan. Three different roots to express three different meanings.

Choosing the right word ensures the correct tone and emphasis; it adds credibility because the choice strengthens the argument being made. Conversely, choosing the wrong word or a lesser word to convey an idea weakens the argument. Consider the effect of using feel, believe, and think in propounding a theorem.

The reader who encounters “I feel this theorem is correct” cannot precisely determine how correct the theorem is in the author’s view. Feel is so weak that it is a straddling word — that is, a word that straddles the gap between is and is not, may and may not, fact and fiction, and the like — but without clarity as to whether it leans more to the is and less to the is not or vice versa. Feel is equidistant, giving the author the most wiggle room.

Believe is less weak in the construction “I believe this theorem is correct.” Yet, it too is a straddling word that provides wiggle room. What the author is really saying is that the theorem may or may not be correct but on the continuum between may and may not, the author is more on the may than the may not side.

To say, however, “I think the theorem is correct” is to firmly come down on the is, may, fact side of the continuum. The author is telling the reader that the author has a high degree of certainty of the correctness of the position — not an absolute certainty, but a high degree.

Is this distinction important? Yes, albeit less important in fiction and greatly more important in nonfiction writing. Think of a medical diagnosis: Would you prefer to have a less certain or more certain diagnosis? Would you prefer the doctor to be less certain or more certain about the efficacy of a treatment protocol?

Similarly, there is increasing misuse of that and which. That is used in a restrictive clause, whereas which introduces a nonrestrictive clause. Each reflects a different meaning and requires different punctuation: the nonrestrictive clause is separated from the rest of the sentence by a preceding comma. The which clause, as a nonrestrictive clause, provides supplemental information, information that the sentence could omit without harm given the context presented by the material that precedes and follows the sentence.

Bryan Garner, in his Garner’s Modern American Usage (3rd ed., 2009, p. 806), provides the following example sentences:

  • “All the cars that were purchased before 2008 need to have their airbags replaced.”
  • “All the cars, which were purchased before 2008, need to have their airbags replaced.”

A careful read of the sentences indicates the distinction. Yet, making the choice between that and which, like making the choice between feel, believe, and think, can be the difference in communication or miscommunication.

Between used with numbers is another good example of the effect of word choice. When we write between 5 and 10, do we mean 6, 7, 8, and 9 or 5, 6, 7, 8, 9, and 10? Correctly it is the former, but many authors intend the latter. If the latter is meant, it is more precise to write from 5 to 10 as that includes the beginning and ending numbers. Is the distinction important? Think about a book describing space travel and the number of years it would take to get from point A to point B. If I write between 5 and 10, the reader can deduce that it will take 6, 7, 8, or 9 years, whereas if I write from 5 to 10, the reader can deduce it will take as few as 5 years or as many as 10 years or some number of years between 5 and 10. The latter conveys the broader range, the former conveys the narrower range.
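If it helps to see the two readings side by side, here is a minimal sketch in Python (my own illustration, not anything from a style manual) that treats “between 5 and 10” as excluding the endpoints and “from 5 to 10” as including them:

    # "Between 5 and 10" read strictly: the endpoints are excluded.
    def between(low, high):
        return list(range(low + 1, high))

    # "From 5 to 10": the endpoints are included.
    def from_to(low, high):
        return list(range(low, high + 1))

    print(between(5, 10))   # [6, 7, 8, 9]
    print(from_to(5, 10))   # [5, 6, 7, 8, 9, 10]

The function names are my own shorthand for the two phrasings; the point is only that the two sets are not the same.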

A professional editor helps the author make these correct word choices. Where the correct choice matters, it can be the difference between clear communication and miscommunication.

March 11, 2010

On Words: Filibuster

I’m in frustrated-angry mode. My local power utility (read: monopoly) has raised its rates twice in 12 months and has applied for a third rate increase. My Internet/TV/telephone package rate has gone up because the company added cable channels that I’m not interested in ever watching (truth be told, I don’t ever watch TV and have cable TV only because my wife insists).

But the final blow came in the mail from my health insurance company: our rates are going up 25%. The excuses given include higher New York State taxes (mine have gone up significantly, too), increased use of health care services by others, federal expansion of COBRA, a large number of H1N1 flu cases, and federal expansion of “large group” mental health and substance abuse coverage (we are a small group).

Then I read the latest on healthcare reform in Washington, DC — the movement that appears to be going nowhere fast — and how a filibuster is threatened should a bill come to the floor of the Senate. Setting aside my frustration with politicians who think first about lining their pockets and last about their constituents, I wondered about the origins of the word filibuster. There is a certain Kafkaesqueness, a certain Alice-in-Wonderland-ness about the word that intrigues me.

The American Heritage Dictionary of the English Language (4th ed.) defines filibuster as “1a. The use of obstructionist tactics…for the purpose of delaying legislation.… 2. An adventurer who engages in a private military action in a foreign country.” The first definition is what we commonly understand, but the second is closer to the word’s roots.

In early American English, the spellings were fillibustier and flibustier. Sometime in the 1850s the spelling changed to the current filibuster. An early English spelling (16th century and earlier) was flibutor, which was borrowed from the Dutch word for freebooter (vrijbuiter); an earlier version of flibutor was fribustier, confusing its origins.

Filibuster is French in origin, coming from flibustier, which referred to the pirates who pillaged the Spanish West Indies during the 1700s. In the 1800s, the word’s usage shifted from the French to the Spanish version, filibustero, which also meant freebooter or common rover (as opposed to buccaneer: buccaneers were French settlers hired as hunters by the Spanish who, when later driven out, turned to plundering, thus morphing buccaneer’s meaning from hunter to pirate). Filibustero was used to describe Americans, primarily Texans, who incited insurrection against Spain in Latin America.

Probably the best-known filibusteros were those who joined Narcisso Lopez’s Cuban insurrection in 1850-1851, and those who followed William Walker’s insurrection against Sonora, Mexico (1853-54) and against Nicaragua (1855-58). As reported by the Lawrence (KS) Republican, June 2, 1857, “Walker, the filibuster, has been forced to capitulate.”

This sense of filibuster (freebooter, revolutionist, insurrectionist) remained in use for decades and was used to describe other persons whose tactics were similar to those of the American filibusters. For example, an article in Knowledge (1887) said: “What were the Normans…but filibusters? What were the Pilgrim Fathers but filibusters?” Columbus and William the Conqueror also were called filibusters. But this sense has, for the most part, faded away as the political sense has gained use, although it isn’t clear to me that this original sense isn’t an apt description of today’s filibusters.

One of the earliest uses of filibuster in the sense we think of it today, that is, as a tactic by a member of the legislative minority to impede action by the majority, was by the Portland Oregonian (February 5, 1853): “Filibustero principles do not appear to meet with much consideration from the southern members of congress.” In 1889, the Boston Journal (January 14) noted that “The surrender of legislative functions by the majority of the House and the carrying on of business…only by a humiliating ‘treaty’ with a single determined filibuster is something entirely anomalous in a country…governed by majority action.”

Of course, in the early days of legislative filibustering, filibusters were required to speak — Jimmy Stewart’s “Mr. Smith Goes to Washington” in real life — and garnered little sympathy when they could no longer command the floor. As the New York Times (January 31, 1915) wrote: “The Senate sits…and the overwhelmed filibusters simply cannot talk.” Two weeks later, the New York Times (February 16) reported: “The Republicans will filibuster…against the cloture rule.” How little has changed in 95 years!

This action, speaking for the sole purpose of consuming time, was the required method before the Senate became a gentlemen’s club run at taxpayer and citizen expense. Now the excuse is that Senators have other important business to attend to (e.g., fundraising, violating ethics, lobbying against the interests of their constituents), so why waste time listening to endless speechmaking? The Congressional Record of February 11, 1890, noted that “A filibuster was indulged in which lasted…for nine continuous calendar days.” Just think — 9 days of legislative peace!

But there was a spark of humor in the annals of senatorial filibustering. Consider this Chicago Times (July 22, 1947) report of a filibuster: “You’re filibustering against the wrong bill, Senator–the resolution before the Senate is for adjournment.” Now if only the American voter could filibuster, perhaps we could put an end to Washington gridlock.

One final note: I am intrigued that both the act and the actor are called filibuster. Why is the actor not called filibusterer?

February 24, 2010

On Words: Believe and Know

Several events in the past few weeks suddenly converged in my mind, causing me to realize that in the discourse about ebooks, especially about what constitutes fair ebook pricing, the unbridgeable divide is between believe and know.

The first events were discussions about ebooks and what constitutes fair pricing for an ebook. Three types of people participated in those discussions: those who admittedly had no direct knowledge of the costs involved in publishing an ebook, those who did have direct knowledge, and those who believed they knew. As is typical of such discussions, those who admitted not knowing were open to learning and the other two types were trying to teach. But between the teachers there was no room to compromise; those who believed they knew — the believers — simply would not consider or accept that believe and know are not synonymous, that there is a chasm between the two words.

Then came the New York Times Magazine article, “How Christian Were the Founders?”, which discussed the efforts by pressure groups in Texas to shape the secondary school curriculum by requiring textbooks to reflect their view of history. This pressure was previously applied to the science curriculum, the Kansas school board fight having made national headlines.

The article and the ebook fair-pricing discussions brought to mind this war between two words: belief and knowledge. The core problem in the discussions about both pricing and textbook content is the chasm between believe and know.

Believe, although having some slim foundation in evidence, signifies something unprovable (or perhaps less provable), and thus less firmly based in evidence, than know, whose foundation is firmly based on the provable and demonstrable. For example, we may believe there is minimal cost to creating an ebook of a pbook, but we do not know what that cost is — we can’t prove it or demonstrate it. The same concept holds true for any belief, whether economic, cultural, religious, scientific, or something else.

Unlike know, believe covers a wide range of credulity. Know is more constrained; its verity must be demonstrable. Believe needs no more than the statement “I believe” something to be true, leaving it to the listener to supply the factual base — no matter how slim or wobbly — for where to place the belief on the continuum that ranges from pure speculation to pure fact.

Believe denotes the acceptance of the truth or actuality of something, that it is real, even if it may not be real. For example, the belief that because an ebook is a digital file of the pbook, there is no cost to creating the ebook. Know, on the other hand, has its basis in experience rather than acceptance, such as the experience of smelling a rose. Having never smelled a rose, I could say that I believe the rose’s fragrance is similar to that of a skunk; but having smelled both a rose and a skunk, I could say I know that the rose’s fragrance is dissimilar to that of the skunk.

A belief statement might ultimately prove correct, but then believe would transform itself to know. The know statement, however, cannot be transformed from know to believe. Once I have smelled both the rose and the skunk, that I know doesn’t change. What I know might change, but not that I know.

Believe embraces the possibility of doubt: No matter how firmly one believes something, by describing that conviction as a belief, one ascribes some doubt, albeit infinitesimally small, as to the verity of that belief. In contrast, know doesn’t permit that possibility of doubt; it doesn’t permit any doubt: I believe the rose smells like a skunk, but I know it doesn’t.

It is the improper use of these two words that leads to the ongoing cultural wars reflected in the battles over what should and should not be taught in school and what is or is not a fair price for an ebook. Too many people equate believe with know. The two are not the same, nor does either include the other. It is when believe transforms to know that fact is possible, but until that transformation occurs there is always some doubt.

Interestingly, know not only cannot transform to believe, but it cannot embrace believe as a component of itself. To do so would be to weaken know and impose that element of doubt that distinguishes it from believe. In this instance, know must stand aloof and by itself.

Would proper use and understanding of these words deflect any of the passionate discourse that surrounds “I believe” statements in the cultural and ebook pricing wars? I doubt it would matter. There are some things that we grasp and cannot let go, that are beyond believe and know in the sense of a willingness to transform from the former to the latter; after all, we invented these words as a method of describing those immutable beliefs and distinguishing between possibility and fact. But proper use and understanding might shine a different light on the divide and permit a coming closer together. Unlike conflicting knowledge, it is impossible to reconcile conflicting belief, which is why we can expect the question of what is fair ebook pricing to remain unresolvable.

January 28, 2010

Publishers vs. Editors & the Bottom Line: Readers are the Losers

In 1966, William Baumol and William Bowen described the economics of the performing arts. The point of their study was that some sectors of an economy have high labor costs because they tend not to benefit from increased efficiency. Baumol and Bowen illustrated this proposition using a 1787 Mozart string quintet: that quintet required 5 musicians and a set amount of playing time in 1787 and today still requires 5 musicians and the same amount of playing time.

Like Mozart’s quintet, there is a limited amount of efficiency that can be gotten in the editorial process. A 500-page manuscript still needs to be read page by page, paragraph by paragraph, sentence by sentence, word by word, when edited.

Years ago the reading was done on paper with pencil, and editors used a limited number of markings to signify elements of the manuscript, such as a chapter title or a bulleted list. Today the coding has become more complex and most manuscripts are read on a computer. But editing is still as labor intensive today as it was 25 or 50 or 100 years ago; perhaps it is even more labor intensive, as editors have assumed responsibilities they didn’t have back then, such as removing author-inserted styling. And some publishers now want editors to apply XML codes and use advanced, expensive software like InCopy. Editors are now doing much of the work that typesetters did as recently as the 1980s, in addition to dealing with issues of grammar, spelling, syntax, and organization. (For a discussion of what an editor does, see Editor, Editor, Everywhere an Editor.)

Yet, unlike other labor-intensive professions such as nursing, garbage collection, and teaching, wages for editors haven’t grown; instead, they have declined. (Imagine paying a nurse or a teacher today what they were paid in 1995, let alone what they were paid in 1985 or 1975.) In fact, in contrast to what would be expected in the normal course of events, publishers have decided to make editors their sacrificial lambs on the altar of quarterly profits and are now paying rates that are the same as they paid in 1984 or, in some cases, less, while demanding that more work be done in a shorter timeframe.

One book packager (a packager is a company hired by a publisher to handle most or all aspects of the editorial and production phases of publishing a book) recently solicited experienced American editors to do high-quality editing in the medical field (and wanted a noncompetition agreement, too!). High-quality medical editing is slow and careful, with a rate of 3 to 5 manuscript pages an hour the norm, especially if the manuscript requires a “heavy” edit. In exchange for the editor’s effort, the packager offered a rate of 80 cents a page, or $2.40 to $4.00 an hour — not even minimum wage, let alone a wage commensurate with the skill and knowledge levels required for this kind of editing. Would you want your doctor to rely on such a low-quality book to prescribe your medications?
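For anyone who wants to check the arithmetic, here is a quick, illustrative calculation of my own (not anything the packager provided) of the hourly wage implied by 80 cents a page at the normal high-quality pace of 3 to 5 pages an hour:

    # Hourly wage implied by a per-page rate at a given editing pace.
    RATE_PER_PAGE = 0.80  # dollars per manuscript page

    for pages_per_hour in (3, 4, 5):
        hourly = RATE_PER_PAGE * pages_per_hour
        print(f"{pages_per_hour} pages/hour -> ${hourly:.2f}/hour")

    # Prints $2.40, $3.20, and $4.00 an hour, respectively.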

Not all publishers or packagers pay such a miserly sum, but this packager doesn’t stand alone. In fact, this packager is surrounded by myriad other packagers and publishers who pay poverty-inducing wages, and American professional editors are seeing such low offers with increasing frequency.

Who loses when editors are hired at such poverty-inducing rates? The book buyer loses because it means that an unskilled editor will be hired to do a very cursory editing job. When you buy a book that is riddled with errors, an increasingly common occurrence these days, put the blame squarely where it belongs: on the shoulders of the publishers who have lost any sense of pride in the quality of their books.

As with any profession, editors deserve a fair wage for their skill and knowledge, with specialized skills deserving higher compensation. Publishers have lost the book buyer’s trust because of high prices paired with low quality. One way to regain buyers’ trust is to raise quality. To raise quality, a publisher needs to hire experienced, skilled editors at a fair rate of compensation.

The hue and cry for quarterly profits doesn’t mean that costs should be contained regardless of what is sacrificed. Rather, it means that publishers must change their business model and become more efficient in those areas where efficiencies can be obtained. Editing is not one of those areas because a lower price for editing does not equate with higher efficiency or quality. Editing is labor intensive — a computer cannot take over an editor’s work. Someday publishers and packagers will realize that false economies are a sure path to extinction.
