An American Editor

April 2, 2010

On Words: Clinch and Clench

In a recent New York Times article, U.S. Senator Robert Bennett (Republican of Utah) was quoted as saying “…it was through clinched teeth that they welcomed me.…” Immediately, I thought “you mean ‘clenched teeth.’” Although I was certain clench was correct, I decided I had better check.

In olden days, way back in the 16th century and perhaps even earlier, clinch and clench were identical in usage terms — they meant and referred to the same thing. Clench, a verb, can trace its roots to about 1250 and to clenchen from The Owl and the Nightingale. Clenchen developed from the Old English beclencan, meaning to hold fast, and has Germanic roots (i.e., klenkan in Old High German and klenken in Middle High German, both of which meant to tie, knot, or entwine).

Clinch came into being about 1570 as a variant of clench, as a verb meaning fasten firmly. Approximately 60 years later, the figurative sense of clinch, meaning to settle decisively, came into use. Clincher took a sidetrack; originally it was a noun (1330) describing a worker who put in clinching nails. The first recorded use of clincher to mean a conclusive argument or statement was in 1737.

Clinch became Americanized in the 19th century to mean the sense of a struggle at close quarters (1849) and morphed to mean a tight fighting grasp (1875). As its history shows, the general sense occurs early in English, but the modern technical use is American.

Along the way, clinch and clench became differentiated. In American usage, clinch became figurative and clench became physical. As Bryan Garner (Modern American Usage) puts it: “Hence you clinch an argument or debate but you clench your jaw or fist.” I have been unable to identify either the point at which usage shifted or any sources that document the shift. The basis for Garner’s statement isn’t clear to me, except that it comports with my understanding of the terms.

Even so, it isn’t evident from the dictionaries or from usage that Senator Bennett was wrong to use clinch rather than clench. I concede that clench sounds better, sounds more correct, to my ear, and if I were his speechwriter, clench is the word I would have chosen.

If you have any additional information on the separation of clinch and clench, particularly in the American lexicon, I would appreciate your sharing it with me.

April 1, 2010

The Professional Editor’s Bookshelf

I have been a professional editor for more than 25 years and during those years I have purchased, read, and used numerous references. Even now I look for additional language reference books to buy (I have on order, for example, An Analytic Dictionary of English Etymology: An Introduction by Anatoly Liberman).

There is no list of must-have reference books that every professional editor should own or have immediate access to, with the possible exception of standard dictionaries; which books should be part of an editor’s reference library depends a great deal on the types of manuscripts the editor works on and the type of editing performed (by which I mean whether one does developmental editing, copyediting, or both).

One book every editor should have (in addition to dictionaries) is the appropriate style manual. There are many style manuals available; even news organizations like the New York Times and the Associated Press have them. Sometimes the required style manual is nothing more than the grammar and style rules created by the client, but usually it is one of the standard manuals, such as the Publication Manual of the American Psychological Association, The Chicago Manual of Style, the MLA (Modern Language Association) Style Manual and Guide to Scholarly Publishing, the AMA (American Medical Association) Manual of Style, and the Council of Science Editors’ Scientific Style and Format, to name but a few. The style manual is the arbiter of the rules to be applied to a manuscript: for example, how a reference is to be styled, how a quotation is to be delineated, whether serial commas should be used, whether prefixes should be hyphenated or closed up, whether a phrase should be hyphenated, and so on.

In addition to the appropriate style manual, an editor’s bookshelf must contain at least one dictionary, although many editors will have several. Two of my favorite dictionaries are The American Heritage Dictionary of the English Language and Merriam-Webster’s Collegiate Dictionary. Although one would think that all dictionaries are the same, they are not, and clients often have a preference. Along with a standard language dictionary, specialized dictionaries are needed. For example, medical editors often own several medical specialty dictionaries, such as Stedman’s Medical Dictionary, Dorland’s Illustrated Medical Dictionary, and the APA Dictionary of Psychology, in addition to the standard English language dictionaries.

My bookshelf also includes “word” books, that is, books that are lists of accepted words and their spelling for a particular specialty subject area. Because I do a lot of medical editing, I have numerous medical word books. Specialty areas, like medicine, also require specialty reference books. My medical library, for example, includes several drug reference manuals, drug interaction guides, and medical test guides. And because a lot of my specialty work also includes chemical compounds, my library also includes chemical reference books like The Merck Index.

But my bookshelf also includes books devoted to language usage, such as Garner’s Modern American Usage, The American Heritage Guide to Contemporary Usage and Style, and Merriam-Webster’s Dictionary of English Usage. These are the books that explain in detail, for example, when which is correct and the difference in usage between farther and further.

Usage books only tell part of the story. Another part is told in a word or phrase’s history (etymology). Some of this information is available in the standard dictionary, especially the Oxford English Dictionary and the American Heritage Dictionary of the English Language, as well as from specialty books like A Dictionary of Americanisms, Chambers Dictionary of Etymology, and The Oxford Dictionary of Word Histories. These resources are valuable in determining whether a word or phrase is being used appropriately.

Also useful are texts that help an editor analyze the roots and origins of a word, especially when an author uses a wholly unfamiliar word, including one not found in the standard language references, or creates a new word. Composition of Scientific Words is particularly helpful with science words and the Word Parts Dictionary is useful with standard English words.

In addition to books about words, a professional editor’s bookshelf includes books about grammar. Grammar books also address the correct-word issue, but their focus is more on correct sentence structure: for example, restrictive versus nonrestrictive clauses, the use of commas, passive versus active voice, and the like. I suspect many editors make use of The Gregg Reference Manual when grammar questions arise.

Some editors rely on online resources in this Internet Age. I find that troublesome to the extent that there is no assurance of reliability or accuracy. I know the source of my Merriam-Webster’s Collegiate Dictionary, but have no idea of the source for or accuracy of a Wikipedia article. Having grown up in the print age, I am not comfortable relying on the Internet as the source of my information. But making use of online resources is also an important part of an editor’s job; the key is knowing which resources to accept and which to reject. A professional editor can knowledgeably make that decision.

Why is the editor’s bookshelf important? Because it helps separate the professional editor from the amateur. The professional editor has a deep interest in language and how language is used. The professional editor wants to improve communication between the author and the reader. The professional editor devotes significant time and resources to mastering language so that when a manuscript leaves the editor’s hands, it better communicates the author’s message. Nonprofessional editors neither make that investment nor work to master the language skills that are needed.

The difference between a professional and a nonprofessional editor can be the difference between clear communication and miscommunication of an author’s message. The comprehensiveness of the editor’s bookshelf, the editor’s resources, is a clue to the editor’s professionalism, and something that every author should be interested in.

March 29, 2010

Footnotes, Endnotes, & References: Uses & Abuses

I read a lot of nonfiction books, both in my work and for pleasure, and one of the things that most annoys me is the lack of thought given to footnotes, endnotes, and references.

Many years ago, an academic client told me, in response to my question about why a chapter of 50 manuscript pages had nearly 1,000 references — a bit of overkill, I thought — that in his academic circles, if he wanted to move up the ladder his writings had to have lots of references. He went on to say that it was not unusual for people to look at the quantity rather than the quality of the references.

References do have a legitimate purpose, but this comment made me wonder — and I continue to wonder — about notes (notes being the inclusive term for footnotes, endnotes, and references). Granted, I am as guilty as my client’s academic peers in that if I see a book on a heavy subject that purports to be the comprehensive study of the subject to date but has only a few references, I wonder about the quality of the work. On the other hand, if I find every other word bearing a reference, I wonder whether any real effort went into the writing; is there any original material to be found between the covers? There is a fine line between too much and too little referencing.

There is also the problem of quality vs. quantity, especially when many of the notes cite references that are citing other references, that is, a cite of a cite of a cite or the syndrome of inconsequential citation. If Jones cites Smith who cites Waterloo for a proposition espoused by Spinster, and Jones hasn’t verified (a) that Spinster actually espoused the proposition, (b) that Waterloo has correctly cited and attributed to Spinster (as, e.g., in correctly quoting Spinster), and (c) that Smith is correctly citing Waterloo, of what value is the cite other than to take up space? And if Jones is going to go to the trouble to verify the sources, as Jones should, then why not bypass Smith and Waterloo and directly cite Spinster?

Referencing is necessary in serious academic work. I don’t dispute that. But how it is done is problematic. Is it more important that I attend to the references or to the text? And what about footnotes (and endnotes) that provide their own discussion or explanation of the material? I still shudder when I come across a footnote that is many paragraphs long and has umpteen cites to support just the footnote. I have always been of the view that if something is important enough to be in an explanatory note, it should be incorporated into the main text.

Unlike end-of-book references, footnotes and endnotes are distractions. They interrupt the reading flow. If they give no more information than a reference cite, why distract the reader from the text with a callout to the reference? If they provide additional details that the reader should be made aware of, why not incorporate that information in the text body? If it isn’t important enough to be incorporated into the main text, perhaps it is not important enough to interrupt the reader’s concentration on the text.

Endnotes are worse than footnotes because they prevent the reader from easily scanning the note to see how worthwhile interrupting reading the text to read the notes would be. One needs to locate the endnote by physically turning to a new location in the book. How frustrating to get to the endnote to discover that in its entirety it reads: Ibid. That bit of information was certainly worth interrupting concentration on the text! Noting distracts the reader, usually for no intellectual gain.

The problem is academia. Too much emphasis is placed on unimportant things. It is the form rather than the substance that dominates. Not so many years ago, in a discussion with academics at a local college, it was made clear that if someone wanted to get tenure at the college, they had to write a peer-reviewed book that was published by a publisher from an approved list, which list was in rank order; that is, the closer the publisher was to the top, the better the chances of obtaining tenure. It was also made clear that there were specific expectations regarding noting, including a minimum number of expected notes.

It seems to me that the communication of knowledge should be the primary focus of an academic book. Scholarship should be judged on the information conveyed within the main body, not the number of times concentration is interrupted. In fact, interruptions should be minimized and minimal interruptions should be rewarded.

Readers assume that if a work is cited in a note or reference, the book’s author has actually read the cited work rather than relied on someone else’s summary of it. Readers also assume that the cited work actually says what is claimed or relates to the material for which it is being cited. Are these valid assumptions? I know that as a reader I have neither the time nor the desire to check each cite for accuracy — neither for the accuracy of the cite itself nor for the content for which it is cited. I wonder how many people actually check each and every cite, or whether we are simply impressed and overwhelmed by the number of cites.

I think that scholarship can be better served by more effort placed in writing the main text, fewer footnotes (and no endnotes), and a comprehensive reference list at the end of the book that is divided into two parts: references relied on for the book and recommended additional sources of information. If the author has a message worth communicating, it is worth not interrupting and worth not going down the side roads to which footnotes and endnotes often lead. Occasional footnotes, even lengthy explanatory ones, are appropriate, but it is inappropriate, in my thinking, to bombard the reader with hundreds of distractions.

Another questionable practice as regards footnotes, endnotes, and references is the citing of online material. Here today, gone tomorrow is, unfortunately, the reality of a lot of online material. Unlike a book that gets stored in libraries for future generations to use, online material often shifts or disappears and is difficult to verify. Today’s valid URL is tomorrow’s Not Found error.

When I see a book that relies heavily on online sources, I wonder about the content. Online material isn’t always scrutinized for accuracy, making it suspect. Along with overnoting and poor noting, heavy reliance on online sources is not a sign of quality; rather, it is a sign of quantity.

Something authors should keep in mind: The purpose of writing a nonfiction book is to advance knowledge, spread it around; it is not to create a book that simply sits on the buyer’s bookshelf. It is better to be remembered for what one wrote than for what one noted.

March 26, 2010

On Words: Panjandrum

I hadn’t read anything that used the word panjandrum in decades. Truth be told, I’d forgotten what it means, even that it exists, until a couple of weeks ago when I read the following in The Economist in an article about President Obama’s chief of staff, Rahm Emanuel:

Mr Emanuel is famous for being the president’s pugnacious panjandrum.…

One thing I can say about The Economist, it doesn’t mince language. By reputation, not by pronouncement, it is the newspaper/magazine, and it tends to choose words to describe events that one rarely encounters in daily American English. Panjandrum is just the most recent example.

Probably the best place to start is with its meaning. I confess that upon reading panjandrum I immediately reached for my dictionary. According to Merriam-Webster’s Collegiate 11e, panjandrum means “a powerful personage or pretentious official.” Well, there’s no doubt about Rahm Emanuel’s power or pretentiousness.

The word comes from Grand Panjandrum, an invented phrase in a nonsense passage written in 1755 by Samuel Foote, an English actor and dramatist, to test the vaunted memory of the actor Charles Macklin, who claimed he could repeat anything after hearing it once. The memory-testing passage was:

So she went into the garden to cut a cabbage-leaf to make an apple-pie; and at the same time a great she-bear, coming down the street, pops its head into the shop. What! No soap? So he died, and she very imprudently married the Barber: and there were present the Picninnies, and the Joblillies, and the Garyulies, and the great Panjandrum himself, with the little round button at top; and they all fell to playing the game of catch-as-catch-can, till the gunpowder ran out at the heels of their boots.

I don’t know if Macklin lived up to his boast, but this is surely a passage to test one’s short-term memory!

Nat Hentoff used the word to describe “a panjandrum of the publishing business.” Salman Rushdie used the term in his novel The Satanic Verses: “Look: there she is, down there, sitting back like the Grand Panjandrum.” George E. Farrow, in his Dick, Marjorie and Fidge: A Search for the Wonderful Dodo, wrote, for example: “Panjandrum is a very severe one” and “I am the Ambassador Extraordinary of his Magnificence the little Panjandrum, and you tell me that you have seen the Dodo; that is enough.” E. Cobham Brewer wrote, in his Character Sketches of Romance, Fiction and the Drama (1892), “The squire of a village is the Grand Panjandrum, and the small gentry the Picninnies, Joblillies, and Garyulies.” And Jessie Hubbell Bancroft, in her Games for the Playground, Home, School and Gymnasium (1922), listed one of the instructions as: “One player is chosen to be the Panjandrum, an important personage requiring a body guard.”

In the Dictionary of Phrase and Fable (1898), also by E. Cobham Brewer, panjandrum was defined as “The Grand Panjandrum. A village boss, who imagines himself the ‘Magnus Apollo’ of his neighbours.”

In the 1922 Roget’s International Thesaurus of English Words and Phrases, the word was placed amidst more sinister words: “…TYRANT, disciplinarian, precisian, martinet, stickler, bashaw, despot, the Grand Panjandrum himself, hard master, Draco, oppressor, inquisitor, extortioner…”

Randolph Caldecott (1846–1886), the great 19th-century children’s book illustrator and author for whom the Caldecott Medal is named, illustrated a book titled The Great Panjandrum Himself (Samuel Foote was the named author, although Foote had died in 1777) and wrote and illustrated The Panjandrum Picture Book.

Panjandrum was also a Broadway musical by Woolson Morse and J. Cheever Goodwin. It had a short run by today’s standards, opening on May 1, 1893, and closing the following September.

But panjandrum never dies. In World War II, the Panjandrum was a massive, rocket-propelled, explosive-laden cart designed by the British military. It was one of a number of highly experimental projects developed by the British Admiralty’s Directorate of Miscellaneous Weapons Development in the final years of the war. The cart was never used in combat. Tom Wolfe mentioned the project in his 1979 book The Right Stuff. On June 5, 2009, the Daily Mail ran an article about the Panjandrum experiment, and the online version includes a video of the Great Panjandrum (reconstructed) in action.

Great Panjandrum also appears in Jasper Fforde’s 2003 novel The Well of Lost Plots, featuring literary detective Thursday Next. The Great Panjandrum is the leader of BookWorld, where the action takes place.

So even though I haven’t seen the word used in years, it obviously has been, albeit sporadically. Now that I have reencountered it, I think I will try to incorporate it into my vocabulary, especially when discussing politics. After all, a nonsensical word seems a most appropriate appellation to use when discussing politicians. And I will watch for its next appearance in my readings.

March 19, 2010

On Words: Almighty Dollar

How many times have we heard or read the phrase “the almighty dollar”? We know what it means: the dollar is the object of universal devotion on the part of Americans. But where did the phrase come from?

It appears that Washington Irving is the coiner of this particular phrase, although it could be argued that Ben Jonson, a contemporary of Shakespeare and himself an English dramatist, deserves the credit because he had used “almighty gold” in 1616 in his Epistle to Elizabeth, Countess of Rutland (“The flattering, mighty, nay, almighty gold”).

Washington Irving coined the phrase in the November 12, 1836, issue of Knickerbocker magazine, writing in his story “The Creole Village”: “In a word, the almighty dollar, that great object of universal devotion throughout our land, seems to have no genuine devotees in these peculiar villages.” Irving, a year later, in the midst of the financial panic engulfing America, wrote the dollar is “daily becoming more and more an object of worship.”

The almighty dollar found itself part of the social commentary in “The Wants of Social and Domestic Life” (Genesee Farmer, November 1852), where it was written, “In the eagerness of our pursuit of the almighty dollar, how prone we are to forget the wants, and neglect the duties of domestic life.” In the story “The Garden” (Blackwood’s Edinburgh Magazine, February 1853), we find: “Their pursuit of the all-mighty dollar is too passionate and intense to admit of interruption from the recreations of horticulture.” In 1857, the Sacramento Phoenix wrote, “In dreams they nod, and mutter ‘God,’ but mean the Almighty dollar.”

Edward Bulwer-Lytton coined the related phrase “pursuit of the almighty dollar,” which he used in his 1871 novel The Coming Race.

The almighty dollar also has been alluded to in a variety of ways, for example: In 1855, the Monterey Sentinel wrote, “To-day is ‘steamer day’ every body is astir — the immortal dollar is jingling.” Beadle’s Missourian (1866) wrote: “Even the Indian…is moved by the almighty dollar, or, rather, by the almighty half-dollar, for that is the only denomination of specie in which he will receive payment.” The Las Vegas (NM) Gazette (1884) commented that the “street car driver made [him] walk up to the front of the car like a little man and deposit the almighty nickel in the box.”

Newsweek (January 5, 1948) noted, “Something had happened to his standard of value — the almighty dollar — which deeply disturbed him.” And Time (June 16, 1947) said, “There is a limit to the sacrifices some Britons would make for the sake of the almighty greenback.”

Today, as our politicians pursue reelection contributions, we can thank Washington Irving for identifying the nearly 200-year-old worship of the almighty dollar. And for those who need more spiritual sustenance, perhaps The Church of the Almighty Dollar is the place for you!

March 12, 2010

On Words: Is the Correct Word Important?

One facet of a professional editor’s work is to help an author choose the correct word to convey the author’s meaning. I do not mean choosing between homonyms (e.g., seams vs. seems), but rather helping the author communicate with increased precision.

This is less problematic in fiction than in nonfiction, although it does have ramifications in fiction, too. I doubt that it matters much whether a character in a novel believes, thinks, or feels something; that is, use of any of the words is sufficient to convey the meaning intended. But in nonfiction, shades do matter and precision is more important.

Consider feel. Authors often use feel as if it were synonymous with think or believe. It is not unusual to see a construction such as: “The authors feel that a difference of 0.2 standard deviations is insignificant.” But the authors do not really feel this, they believe or think it. Yet many people accept feel as proper usage in this construction. Does it matter? Yes!

Feel is a less intense expression of think and believe, a weak substitute for the correct expression. Consequently, using feel as a substitute for think or believe weakens the argument. Feel’s semantic lineages are touch and sensation; its Old English root is felan. In contrast, believe’s root is the late Old English belyfan, and think’s root is the Old English thencan. Three different roots to express three different meanings.

Choosing the right word ensures the correct tone and emphasis; it adds credibility because the choice strengthens the argument being made. Conversely, choosing the wrong word or a lesser word to convey an idea weakens the argument. Consider the effect of using feel, believe, and think in propounding a theorem.

The reader who encounters “I feel this theorem is correct” cannot precisely determine how correct the theorem is in the author’s view. Feel is so weak that it is a straddling word — that is, a word that straddles the gap between is and is not, may and may not, fact and fiction, and the like — but without clarity as to whether it leans more to the is and less to the is not or vice versa. Feel is equidistant, giving the author the most wiggle room.

Believe is less weak in the construction “I believe this theorem is correct.” Yet, it too is a straddling word that provides wiggle room. What the author is really saying is that the theorem may or may not be correct but on the continuum between may and may not, the author is more on the may than the may not side.

To say, however, “I think the theorem is correct” is to firmly come down on the is, may, fact side of the continuum. The author is telling the reader that the author has a high degree of certainty of the correctness of the position — not an absolute certainty, but a high degree.

Is this distinction important? Yes, albeit less important in fiction and greatly more important in nonfiction writing. Think of a medical diagnosis: Would you prefer to have a less certain or more certain diagnosis? Would you prefer the doctor to be less certain or more certain about the efficacy of a treatment protocol?

Similarly, there is increasing misuse of that and which. That is used in a restrictive clause, whereas which introduces a nonrestrictive clause. Each reflects a different meaning and requires different punctuation: the nonrestrictive clause is set off from the rest of the sentence by a preceding comma. The which clause, as a nonrestrictive clause, provides supplemental information, information the sentence could omit without harm based on the context presented by the material that precedes and follows the sentence.

Bryan Garner, in his Garner’s Modern American Usage (3rd ed., 2009, p. 806), provides the following example sentences:

  • “All the cars that were purchased before 2008 need to have their airbags replaced.”
  • “All the cars, which were purchased before 2008, need to have their airbags replaced.”

A careful read of the sentences reveals the distinction. Yet making the choice between that and which, like making the choice between feel, believe, and think, can be the difference between communication and miscommunication.

Between used with numbers is another good example of the effect of word choice. When we write between 5 and 10, do we mean 6, 7, 8, and 9 or 5, 6, 7, 8, 9, and 10? Correctly it is the former, but many authors intend the latter. If the latter is meant, it is more precise to write from 5 to 10, as that includes the beginning and ending numbers. Is the distinction important? Think about a book describing space travel and the number of years it would take to get from point A to point B. If I write between 5 and 10, the reader can deduce that it will take 6, 7, 8, or 9 years, whereas if I write from 5 to 10, the reader can deduce it will take as few as 5 years, as many as 10 years, or some number of years in between. The latter conveys the broader range; the former conveys the narrower range.
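For readers who think in code, the two readings map neatly onto exclusive and inclusive intervals. A minimal Python sketch (the variable names are mine, purely illustrative):

```python
# "Between 5 and 10" read strictly: the endpoints are excluded.
between_5_and_10 = list(range(6, 10))   # yields 6, 7, 8, 9

# "From 5 to 10": the endpoints are included.
from_5_to_10 = list(range(5, 11))       # yields 5, 6, 7, 8, 9, 10

print(between_5_and_10)  # [6, 7, 8, 9]
print(from_5_to_10)      # [5, 6, 7, 8, 9, 10]
```

Python's `range(start, stop)` is itself half-open (it includes `start` but excludes `stop`), which is why the two lists above need different endpoints; the same off-by-one care applies to the prose.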

A professional editor helps the author make these correct word choices. Where the correct choice matters, it can be the difference between clear communication and miscommunication.

March 11, 2010

On Words: Filibuster

I’m in frustrated-angry mode. My local power utility (read monopoly) has raised its rates twice in 12 months, and has applied for a third rate increase. My Internet/TV/telephone package rate has gone up because they added cable channels that I’m not interested in ever watching (truth be told, I don’t ever watch TV and have the cable TV only because my wife insists).

But the final blow came in the mail from my health insurance company: our rates are going up 25%. The excuses given include higher New York State taxes (mine have gone up significantly, too), increased use of health care services by others, federal expansion of COBRA, a large number of H1N1 flu cases, and federal expansion of “large group” mental health and substance abuse coverage (we are a small group).

Then I read the latest on healthcare reform in Washington, DC — the movement that appears to be going nowhere fast — and how a filibuster is threatened should a bill come to the floor of the Senate. Setting aside my frustration with politicians who think first about lining their pockets and last about their constituents, I wondered about the origins of the word filibuster. There is a certain Kafkaesqueness, a certain Alice-in-Wonderland-ness about the word that intrigues me.

The American Heritage Dictionary of the English Language (4th ed.) defines filibuster as “1a. The use of obstructionist tactics…for the purpose of delaying legislation.…2. An adventurer who engages in a private military action in a foreign country.” The first definition is what we commonly understand, but the second is closer to the word’s roots.

In early American English, the spellings were fillibustier and flibustier. Sometime in the 1850s the spelling changed to the current filibuster. An early English spelling (16th century and earlier) was flibutor, which was borrowed from the Dutch word for freebooter (vrijbuiter); an earlier version of flibutor was fribustier, confusing its origins.

Filibuster is French in origin, coming from flibustier, referring to pirates who pillaged the Spanish West Indies during the 1700s. In the 1800s, the word’s origins shifted from the French to the Spanish usage and meaning. Filibustero, the Spanish version, which also meant freebooter or common rover (as opposed to buccaneer; buccaneers were French settlers hired as hunters by the Spanish who, when later driven out, turned to plundering, thus morphing buccaneer’s meaning from hunter to pirate), was used to describe Americans, primarily Texans, who incited insurrection against Spain in Latin America.

Probably the best-known filibusteros were those who joined Narciso López’s Cuban insurrection in 1850–1851, and those who followed William Walker’s insurrections against Sonora, Mexico (1853–54), and against Nicaragua (1855–58). As reported by the Lawrence (KS) Republican, June 2, 1857, “Walker, the filibuster, has been forced to capitulate.”

This sense of filibuster (freebooter, revolutionist, insurrectionist) remained in use for decades and was used to describe other persons whose tactics were similar to those of the American filibusters. For example, an article in Knowledge (1887) said: “What were the Normans…but filibusters? What were the Pilgrim Fathers but filibusters?” Columbus and William the Conqueror also were called filibusters. But this sense has, for the most part, faded away as the political sense has gained use, although it isn’t clear to me that this original sense isn’t an apt description of today’s filibusters.

One of the earliest uses of filibuster in the sense we think of it today, that is, as a tactic by a member of the legislative minority to impede action by the majority, was by the Portland Oregonian (February 5, 1853): “Filibustero principles do not appear to meet with much consideration from the southern members of congress.” In 1889, the Boston Journal (January 14) noted that “The surrender of legislative functions by the majority of the House and the carrying on of business…only by a humiliating ‘treaty’ with a single determined filibuster is something entirely anomalous in a country…governed by majority action.”

Of course, in the early days of legislative filibustering, filibusters were required to speak — Jimmy Stewart’s “Mr. Smith Goes to Washington” in real life — and garnered little sympathy when they could no longer command the floor. As the New York Times (January 31, 1915) wrote: “The Senate sits…and the overwhelmed filibusters simply cannot talk.” Two weeks later, the New York Times (February 16) reported: “The Republicans will filibuster…against the cloture rule.” How little has changed in 95 years!

This action, speaking for the sole purpose of consuming time, was the required method before the Senate became a gentlemen’s club at taxpayer and citizen expense. Now the excuse is that Senators have other important business to attend to (e.g., fundraising, violating ethics, lobbying against the interests of their constituents); so why waste time listening to endless speech making? The Congressional Record of February 11, 1890, noted that “A filibuster was indulged in which lasted…for nine continuous calendar days.” Just think — 9 days of legislative peace!

But there was a spark of humor in the annals of senatorial filibustering. Consider this Chicago Times (July 22, 1947) report of a filibuster: “You’re filibustering against the wrong bill, Senator–the resolution before the Senate is for adjournment.” Now if only the American voter could filibuster, perhaps we could put an end to Washington gridlock.

One final note: I am intrigued that both the act and the actor are called filibuster. Why is the actor not called filibusterer?

February 24, 2010

On Words: Believe and Know

Several events in the past few weeks suddenly converged in my mind, causing me to realize that in the discourse about ebooks, especially about what constitutes fair ebook pricing, the unbridgeable divide is between believe and know.

The first events were discussions about ebooks and what constitutes fair pricing for an ebook. Three types of people participated in those discussions: those who admittedly had no direct knowledge of the costs involved in publishing an ebook, those who did have direct knowledge, and those who believed they knew. As is typical of such discussions, those who admitted not knowing were open to learning and the other two types were trying to teach. But between the teachers there was no room to compromise; those who believed they knew — the believers — simply would not consider or accept that believe and know are not synonymous, that there is a chasm between the two words.

Then came the New York Times Magazine article, “How Christian Were the Founders?”, which discussed the efforts by pressure groups in Texas to shape the secondary school curriculum by requiring textbooks to reflect their view of history. This pressure was previously applied to the science curriculum, the Kansas school board fight having made national headlines.

The article and the ebook fair-pricing discussions brought to mind this war between two words: belief and knowledge. The core problem in the discussions about both pricing and textbook content is the chasm between believe and know.

Believe, although it may have some slim foundation in evidence, signifies something unprovable (or at least less provable) than know, whose foundation rests firmly on the provable and demonstrable. For example, we may believe there is minimal cost to creating an ebook of a pbook, but we do not know what that cost is — we can’t prove it or demonstrate it. The same concept holds true for any belief, whether economic, cultural, religious, scientific, or something else.

Unlike know, believe covers a wide range of credulity. Know is more constrained; its verity must be demonstrable. Believe needs no more than the statement “I believe” something to be true, leaving it to the listener to supply the factual base — no matter how slim or wobbly — for where to place the belief on the continuum that ranges from pure speculation to pure fact.

Believe denotes the acceptance of the truth or actuality of something, that it is real, even if it may not be real. Consider, for example, the belief that because an ebook is a digital file of the pbook, there is no cost to creating the ebook. Know, on the other hand, has its basis in experience rather than acceptance, such as the experience of smelling a rose. Having never smelled a rose, I could say that I believe the rose’s fragrance is similar to that of a skunk; but having smelled both a rose and a skunk, I could say I know that the rose’s fragrance is dissimilar to that of the skunk.

A belief statement might ultimately prove correct, but then believe would transform itself to know. The know statement, however, cannot be transformed from know to believe. Once I have smelled both the rose and the skunk, that I know doesn’t change. What I know might change, but not that I know.

Believe embraces the possibility of doubt: No matter how firmly one believes something, by describing that conviction as a belief, one ascribes some doubt, albeit infinitesimally small, as to the verity of that belief. In contrast, know doesn’t permit that possibility of doubt; it doesn’t permit any doubt: I believe the rose smells like a skunk, but I know it doesn’t.

It is the improper use of these two words that leads to the ongoing cultural wars that are reflected in the battles over what should and should not be taught in school and what is or is not a fair price for an ebook. Too many people equate believe with know. The two are not the same, nor does either include the other. Only when believe transforms to know does fact become possible; until that transformation occurs, there is always some doubt.

Interestingly, know not only cannot transform to believe, but it cannot embrace believe as a component of itself. To do so would be to weaken know and impose that element of doubt that distinguishes it from believe. In this instance, know must stand aloof and by itself.

Would proper use and understanding of these words deflect any of the passionate discourse that surrounds “I believe” statements in the cultural and ebook pricing wars? I doubt it would matter. There are some things that we grasp and cannot let go, that are beyond believe and know in the sense of a willingness to transform from the former to the latter; after all, we invented these words as a method of describing those immutable beliefs and distinguishing between possibility and fact. But proper use and understanding might shine a different light on the divide and permit a coming closer together. Unlike conflicting knowledge, conflicting beliefs cannot be reconciled, which is why we can expect the question of what is fair ebook pricing to remain unresolvable.

February 10, 2010

On Words: Mugwump

The political partisan divide gets deeper daily. The electorate can’t be counted on to vote in accord with their party registration. Politicians are increasingly nervous that if they do not tilt further to the left or right, they will not be electable. Interestingly, in today’s partisan politics being a centrist seems to ensure that one will not get elected to political office. Makes me wonder if we voters simply want to elect someone we can complain about.

But that aside, the issue today is one of mugwumpery. Can we fickle voters who have registered our loyalty as Republican or Democrat but then desert the anointed party candidate stake a claim to being mugwumps? The bumper sticker possibilities seem endless:

  • Make mugwumpery a daily rite!
  • When the impossible needs doing call a mugwump!
  • Mugwumps brew their own tea!
  • Mugwumps don’t like tea parties!
  • I’m more than a partyer, I’m a mugwumpian!

The sound alone makes me want to proclaim: Mugwumpery — today, tomorrow, forever!

Mugwump (n.) originally referred to an Algonquin chief (mugquomp); John Eliot used the word in his 1663 Indian Bible. Consequently, mugwump became associated with “an important person.” Over the years, however, it became transformed from serious to ironical. For example, in 1835, it was used as follows: “This village, I beg leave to introduce to the reader, under the significant appellation of Mugwump, . . . used at the present day vulgarly and masonically, as synonymous with greatness and strength.”

But it was the presidential election of 1884 between James Blaine and Grover Cleveland that gave mugwump its political meaning. Blaine, the Republican candidate, was disliked by a group of influential Republicans who announced their support for the Democrat Grover Cleveland. The New York Evening Post (June 20, 1884) wrote: “We have yet to see a Blaine organ which speaks of the Independent Republicans otherwise than as Pharisees, hypocrites, dudes, mugwumps, transcendentalists, or something of that sort.” Time (January 12, 1948), speaking of Truman’s election, wrote: “The Mugwumps of 1884, for much the same reason deserted James G. Blaine and helped elect Democrat Grover Cleveland.”

But mugwump wasn’t reserved solely for those who deserted Blaine for Cleveland. There were also Democrat mugwumps, Democrats who deserted Cleveland for Blaine. The Boston Journal (January 21, 1885) reported: “There is a row . . . between a Democrat and a mugwumpian Democrat.”  The Nation (April 14, 1887), gave mugwump a nonpartisan life: “The municipal election in Jacksonville, Fla., last week was another victory for nonpartisanship, and showed that Mugwumpism is growing in the South as well as in the West.”

Even the New York Times was called mugwumpian. The Voice (September 1, 1887), wrote: “Our esteemed Mugwumpian contemporary, the New York Times, is very solicitous for the Republicans to make concessions to the Prohibitionists.”

So mugwump, politically speaking, was first a disaffected Republican, became an Independent Republican, and ultimately moved to total independence. The definition became “a person who withdraws his support from any group or organization; an independent; a chronic complainer who doesn’t take sides.”

Seems to me that we need another political movement in America, and I suggest we call it The Mugwump Party of America. So, my fellow Mugwumpians, shall we gather at Independence Hall on July 4?

February 5, 2010

On Words: Bellum

The American use of bellum, the Latin word for war, is interesting. In American history, the antebellum and postbellum periods are the pre- and post-Civil War, respectively. Neither word is associated with any other war, just the Civil War.

Also interesting is that bellum doesn’t have its own entry in either Merriam-Webster’s Collegiate Dictionary (11th ed.) or the American Heritage Dictionary (4th ed.); the entries are either antebellum or postbellum.

I’ve been word sleuthing, trying to discover definitively why Americans associate bellum only with the pre- and post-Civil War time frames; why not, for example, ante- and post-Vietnam War? My guess is that newspapers and magazines of the post-Civil War era used the phrases so often that they became fixed to those eras.

In 1867, the Fredericksburg News (Fredericksburg, VA) ran a headline “Attention, Ante Bellum Debtors to the News.” In 1878, the North American Review wrote: “To go back to Ante-war money, Ante-war wages, and ante-war prices, might be tolerable if, at the same time we could go back to ante-war freedom from debt and ante-war lightness of national taxation.” Southern Magazine (1874) wrote of Atlanta: “It looks so little like a post-bellum town.”

Although bellum simply means war, and antebellum and postbellum can be used to describe any pre- and post-war period, American usage of these terms appears to be confined to the pre- and post-Civil War periods because of the impact the Civil War had on both American history and the American psyche. Limiting the words to specific time frames gives them a fixed historical reference.

Using a Latin substitute, bellum, for an English word, war, probably would be rejected by many Americans. The Middle English origins of war (werre) indicate a long rejection of bellum for everyday discourse (imagine hearing, “We’re off to bellum!” — it doesn’t strike a very war-like chord — or “Bellum! Bellum! Bellum!”). But antebellum and postbellum, as descriptors of a 30-year period on either side of the Civil War, strike a better-sounding chord, especially when joined with south.

If someone has a better idea or more information on why and how antebellum and postbellum became associated with the 1830-1861 and 1866-1890 eras of American history, let me know.
