An American Editor

September 28, 2012

Articles Worth Reading: The Lie Factory

I’ve decided to start a new column that I’m calling Articles Worth Reading. I subscribe to a lot of magazines and websites — I’m always behind in my reading — and occasionally I read an article that I think deserves wider attention. Those articles will be the subject of this new column.

The inaugural article is Jill Lepore’s “The Lie Factory”, which was published in The New Yorker in its September 24, 2012 issue. The article tells us about the founding of what was then a new industry, political consultancy, in the 1930s. The article explores the roots of what today is a never-ending, high-grossing, must-have business that permeates all of politics.

I invite you to read and enjoy the history of the founding of political consultancy in

“The Lie Factory” by Jill Lepore

September 26, 2012

The Business of Editing: Beware Office for Windows RT

Coming soon to virtually everywhere is the release of the new version of Windows — Windows 8. It will be released in basically two forms: RT for tablets and other devices using the ARM processor and a PC version for devices that use non-ARM processors. So far, so good.

The problem is not with Windows 8 per se. The problem is with Microsoft Office 2013, which is scheduled to be released shortly after the release of the Windows operating system. Office for devices that will run Windows RT will be crippled from an editor’s perspective. As the Office Next blog says,

Office Home & Student 2013 RT is Office running on the ARM-processor based Windows RT OS. It is full Office built from the same code base as the other versions of Office, with small changes that were required as a result of differences between Windows 8 and Windows RT.

But those changes aren’t necessarily small, at least not for the productive editor. According to Microsoft, the following are the primary differences between Office for Windows RT and Office for the PC; that is, Office RT lacks these capabilities:

  • Macros, add-ins, and features that rely on ActiveX controls or 3rd party code such as the PowerPoint Slide Library ActiveX control and Flash Video Playback
  • Certain legacy features such as playing older media formats in PowerPoint (upgrade to modern formats and they will play) and editing equations written in Equation Editor 3.0, which was used in older versions of Office (viewing works fine)
  • Certain email sending features, since Windows RT does not support Outlook or other desktop mail applications (opening a mail app, such as the mail app that comes with Windows RT devices, and inserting your Office content works fine)
  • Creating a Data Model in Excel 2013 RT (PivotTables, QueryTables, Pivot Charts work fine)
  • Recording narrations in PowerPoint 2013 RT
  • Searching embedded audio/video files, recording audio/video notes, and importing from an attached scanner with OneNote 2013 RT (inserting audio/video notes or scanned images from another program works fine)

The key difference for an editor, I think, is the inability to use macros. The lack of macro support is an absolute deal breaker for me; it means I will not even consider buying an ARM-based device, which rules out the Microsoft Surface RT tablet. (The higher-end Surface Pro will use an Intel processor and so will run Office Pro, the standard desktop version of Office; see this comparison of the iPad with the Surface RT and Surface Pro by Laptop Magazine.)

I have spoken with several editors who are either currently using tablets or are thinking of buying a tablet in the near future. I have to admit that the idea of the tablet intrigues me as a tool for editing, although I suspect I would quickly miss my three 24-inch monitors. I suspect that the tablet would end up like my laptop — brought out only a couple of times a year when I’m traveling and never used for any real work.

(My laptop is at least 6 years old and is still in excellent shape. It runs Windows 7 and Office 2010 without a problem, albeit more slowly than a newer laptop would. But what I found with my laptop is that I wasn’t very productive when it came to editing because of its format and because I couldn’t hook up my three large monitors; the laptop can drive only its built-in 17-inch display plus one additional monitor. I’ve become spoiled by my three 24-inch pivoting monitors.)

Right now, I can’t see justifying the expense of buying the Surface Pro or a similar tablet, and I wouldn’t consider an ARM-based tablet now that I know macros would be unavailable in the RT version of Office, which would set my productivity back to the stone age of editing.

But I am planning on buying Windows 8 and Office 2013. Microsoft plans to offer a great upgrade deal for users of Windows 7 wanting to migrate to Windows 8 — $49 for the software. Even if I don’t install it right away, I’m going to buy it at that price. One of the reasons I am interested in Windows 8 is because Microsoft has finally developed what looks like a great cross-platform operating system (OS). I have been holding off upgrading my 8-year-old cell phones because I would like to get a Windows 8-based cell phone, too. I’m one of those people who likes to make life simple and easy, and using the same OS on my desktop, my laptop/tablet, and my cell phone strikes me as being the easy path to take. I may be wrong, but that’s the plan.

In any event, those of us who are dependent on Microsoft Office for our editing need to be cautious about deciding which tablet, if any, to buy, if the tablet is going to be used regularly in our business as a laptop and/or desktop replacement. It appears that Office 2013 will not be available for the iPad; instead, iPad users will have to use Office for the Web, which raises other worries for me (see The Business of Editing: What Happens When the Cloud Isn’t Available?).

If all I want is a tablet that will give me e-mail and Internet access, I have one already: my Nook Tablet. If I want a professional’s tablet, that is, one that gives me access to all the tools I use as a professional editor, I will have to look at the Surface Pro or an equivalent from other makers. If I simply want to get my work done in the most efficient manner I can, I’ll save my money, stick with my desktop and its three monitors, and go the cheap route, buying the Windows 8 OS and Office 2013 Pro upgrades. Right now, it’s looking like a safe bet that I will choose the latter path.

What plans do you have?

September 24, 2012

The Business of Editing: Light, Medium, or Heavy?

One of the things I have never understood about my business is the concept of a client wanting a light, medium, or heavy edit. I’ve never understood it because these are words that really have no meaning when spoken in conjunction with edit.

(It is probably worth noting that these terms are used by publishers, not by authors. In the past, a manuscript was reviewed by inhouse production editors for general problems and for anticipated difficulty of editing. The terms were then used to justify a lesser or higher fee to the copyeditor. Today, most publishers have a single fee and only skim the manuscripts inhouse. No author has ever used those terms when describing what is wanted from me when hiring me to edit his or her manuscript.)

A professional editor gives a manuscript the edit it requires within the parameters of the job for which the editor was hired. If a client says to ignore references, I may ignore references, but if a client says a manuscript needs a heavy edit, I haven’t got a clue how my editing would — or should — differ from what I would do had the client asked for a light edit.

The three terms, instead, are signals to me as to how problematic the client believes a manuscript is. When a client asks for a light edit, I understand it to mean that the client believes the manuscript is in pretty good shape with no structural flaws and minimal grammar and spelling errors. Conversely, a heavy edit indicates to me that there are likely to be numerous structural flaws and lots of grammar and spelling errors, with medium edit falling somewhere between the two extremes.

Yet, there’s the catch. Nearly all clients make the same mistake of confusing copyediting with developmental editing (see, for a refresher on the difference between the two, Editor, Editor, Everywhere an Editor!). In some cases, it is a mistake made out of ignorance; in other instances, it is a deliberate mistake made in hopes (perhaps even in expectation) that the editor will provide a developmental edit at the price of a copyedit.

This comes about because for an editor, there really is no difference between light, medium, and heavy editing. A manuscript gets the edit it needs — except that edit is limited by whether the editor is hired to do a copyedit or a developmental edit. There are boundaries between the two that a professional editor will not cross in the absence of compensation.

Structural problems are a good example. The developmental edit is intended to deal with structural problems but not to focus much on grammar and spelling problems. In contrast, the copyedit is focused on grammar and spelling, and except to note that there are structural problems, ignores structural problems. This is as it should be because the skills required and the time needed vary greatly. It is not uncommon to find that a developmental edit has a speed of 1 to 2 pages an hour, whereas a copyedit runs at 6 to 10 pages an hour.
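Those per-hour rates compound quickly over a full manuscript. A quick back-of-the-envelope calculation (the 300-page length is hypothetical, chosen only for illustration):

```python
pages = 300  # hypothetical manuscript length

# Developmental edit at 1-2 pages/hour vs. copyedit at 6-10 pages/hour
dev_hours = (pages / 2, pages / 1)    # best case, worst case
copy_hours = (pages / 10, pages / 6)

print(f"developmental: {dev_hours[0]:.0f}-{dev_hours[1]:.0f} hours")   # 150-300 hours
print(f"copyedit:      {copy_hours[0]:.0f}-{copy_hours[1]:.0f} hours") # 30-50 hours
```

The fivefold or so gap in hours is why the boundary between the two kinds of edit matters for compensation.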

The use of the terms light, medium, and heavy is problematic because clients and copyeditors are talking past each other when the terms are used. There is no common definition of what they mean and the client’s use is usually based on a false assumption: that the copyeditor will do something different as part of the editing process based on the term chosen.

The assumption is false for many reasons, but the most fundamental reason is that no matter how a client describes the edit, the copyeditor still needs to read and evaluate every word and all punctuation with the goal of ensuring that the manuscript communicates to readers. (Note that I have changed from the broader editor to the narrower copyeditor. This is because the problem particularly arises and is particularly acute when an editor is hired as a copyeditor rather than as a developmental editor.)

In my nearly 29 years of professional editing, I have not changed a single thing that I do as a copyeditor based on whether the client asks for a light, medium, or heavy edit. Copyediting is what it is; it doesn’t change based on light, medium, or heavy.

But those terms do mean something to me as a copyeditor — or at least did in the past, perhaps not so much today. They are flags for the difficulties I can expect to encounter, which means they affect my estimation of the time it will take to edit a manuscript. In past years, I found the terms to be excellent indicators of what to expect; today, I find that they are rarely an accurate indicator. Instead, today, I find that the terms are used as substitutes for whether the manuscript is for a first edition or a revision and for whether the authors are known to be difficult or not difficult to work with.

Invariably, when a publisher hires me to work on a first edition, I am told that the manuscript requires a heavy edit. When I am hired to work on the revision that will be the eighth edition of the book, I am invariably told it requires a light or medium edit, or I am told nothing at all, with the client assuming I understand that only a light or medium edit is required. So, as relatively meaningless as the terms were in the past, they have become even more irrelevant and meaningless today.

Except that I use those terms as a guide to negotiate schedule. For example, I was recently hired to edit a manuscript that was estimated to be 380 pages and that required a heavy edit. The schedule was 2 weeks. I immediately negotiated a longer schedule based on the client’s claim that a heavy edit was required (the sample chapters the client sent didn’t show any unusual problems, but there were a lot more chapters yet to come, so it becomes a guessing game). I subsequently renegotiated the newly negotiated schedule because when I received the complete manuscript, the page count was 490 — the combination of a heavy edit and more pages warranted a longer, just-in-case schedule.

I think editors need to clearly separate what tasks they will do based on the type of edit — copyedit or developmental edit — that a client asks for and ignore requests for a light, medium, or heavy edit except insofar as such terms are viewed as descriptors of the number and type of problems anticipated and how they might affect the editing schedule. After all, how would you edit any differently a manuscript that was to be lightly edited from one that was to receive a medium or heavy edit? Wouldn’t you (don’t you) do all the same things regardless of the characterization of the edit?

One last note: Some clients do, in fact, pay more for a heavy edit and less for a light or medium edit. The number of publishers doing so is rapidly declining as the squeeze on editorial costs increases. But if you do have such a client, then the characterization is also important for setting the fee. Where this is the case, a more thorough evaluation of the manuscript is necessary to ensure that it has been properly characterized — especially as copyeditors do all the same things regardless of the characterization of the edit.

September 19, 2012

The Business of Editing: Macros for Editors and Authors

Times are getting tougher for the editing community. As has been discussed in earlier articles, pressure is being exerted by the publishing community to lower fees, and what should be a natural market for editors — the indie author market in this age of ebooks — has not really developed as expected. Too many indie authors are unable or unwilling to spend the money for a professional editor, and too many of those who are willing to spend the money don’t know enough about finding and evaluating an editor, and so are dissatisfied with multiple aspects of the author-editor relationship and help fuel the do-it-yourself school.

In light of these tougher times, the professional editor has to look at what investments he or she can make that will ultimately generate profitability, even if fees are lowered or remain stagnant. As I have mentioned in past articles, a major contributor to profitability is the purchase and use of software like EditTools, Editor’s Toolkit Plus, and PerfectIt. (For general overviews of these programs and their respective roles in the editing process, see The 3 Stages of Copyediting: I — The Processing Stage, The 3 Stages of Copyediting: II — The Copyediting Stage, and The 3 Stages of Copyediting: III — The Proofing Stage. In The Professional Editor: Working Effectively Online II — The Macros, I discussed macros more specifically.) Yet in recent months, I have received inquiries from fellow editors asking about increasing productivity using macros in a more detailed manner. So, perhaps the time is ripe to address some of the EditTools macros in detail.

When I edit, always in the forefront of my thinking is this question: What can I do to further automate and streamline the editing process? What I want to do is spend less time addressing routine editing issues and more time addressing those issues that require the exercise of editorial judgement and discretion. I want to undertake the routine endeavors as efficiently and profitably as I can; I do not, however, want to sacrifice editorial quality for editorial speed. (Because I often work on a per-page basis, speed is a key factor in determining profitability. However, even when working on an hourly basis, speed is important; because few clients have unlimited budgets for editing, it is important to maintain a steady rate of pages per hour.)

The editing process is additionally hampered by the growth of the style guides. With each new edition, the manuals get larger, not more compact, and there are numerous additional variations that have to be learned and considered. (How helpful and/or useful these guides are is a discussion for a later day.) I have discovered that no matter how well I have mastered a particular style guide, the inhouse editor knows the one rule that has slipped by me and wants to question why I am not following it, no matter how arcane, nonsensical, or irrelevant the rule is.

It is because of the increasing difficulty in adhering to all the rules of a particular style guide — especially when the style guide is supplemented with a lengthy house style manual that has hundreds of exceptions to the style guide’s rules, as well as hundreds of errata released by the style guide publisher, which are not readily accessible — that I increasingly rely on macros to apply preferred choices.

A key to using macros, however, is that they are used with tracking on, but only when appropriate (an example of inappropriate is changing a page range such as 767-69 to 767-769 in a reference cite or changing two spaces to one space between words; an example of appropriate is changing 130 cc to 130 mL or changing which to that). Tracking acts as a signal to me that I have made a change and lets me rethink and undo a change. Consequently, most macros in EditTools, by default, work with tracking on.

(One caution when using tracking and macros: Some macros do not work correctly when tracking is on. That is because, in many page views, the “deleted” original text is not really deleted as far as the computer is concerned. Basic Find & Replace works well with tracking on, but the more sophisticated the Find & Replace algorithm and the more that a macro is asked to do, the less well tracking works. Consequently, I make it a habit, particularly when using wildcard find and replace macros, to run the macros with tracking off.)
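To make the page-range example concrete, here is a minimal Python sketch of expanding an elided range such as 767-69 to 767-769, with a regex standing in for Word’s wildcard syntax (EditTools itself runs inside Word, so this is only a model of the transformation, not its actual code):

```python
import re

def expand_page_range(text):
    """Expand elided page ranges, e.g., '767-69' -> '767-769'."""
    def repl(match):
        start, end = match.group(1), match.group(2)
        if len(end) < len(start):
            # Borrow the missing leading digits from the start number.
            end = start[:len(start) - len(end)] + end
        return f"{start}-{end}"
    return re.sub(r"\b(\d+)-(\d+)\b", repl, text)

print(expand_page_range("see pp. 767-69 and 1101-9"))
# see pp. 767-769 and 1101-1109
```

A change like this is purely mechanical, which is exactly why it belongs in the category of edits run with tracking off.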

I know that I am focusing on increasing an editor’s profitability, but many of the macros in EditTools are usable by authors who are reviewing their manuscript before sending it to a professional editor for editing. What helps make a good editing job also can help make a good writing job! The two processes, although different, are not so distinct that they diverge like a fork in the road. Being sure that “Gwun” is always “Gwun” and not sometimes “Gwin” is important to both the author and the editor.

Unfortunately, both authors and editors tend to think in a singular way; that is, if they are uncomfortable writing and creating macros, they simply forget about them. Authors and editors seek their comfort zone when it comes to production methods because they do not see the production methods as enhancing their ultimate output. This is wrong thinking.

Let’s assume that an author has decided to name a character Gwynthum. The way I work is to enter the name Gwynthum in my Never Spell Word macro’s database for this book (along with other entries such as [perhaps] changing towards to toward, foreword to forward, fourth to forth, other character names, place names, and the like) and I then run the macro before I begin editing. An author would make these entries before doing the first review of the manuscript. Running the macro before I begin alerts me to some problems and fixes others.

Every time the macro comes across Gwynthum in the manuscript, it highlights it in green. Should I then, as I am editing, come across Gwythum or Gwynthim or some other variation, it would stand out because it is not highlighted in green. Similarly, the macro would change every instance of fourth to forth, but do so with tracking on and by highlighting the change with a different highlight color. This would bring the change to my attention and let me undo the change if appropriate.

(In the case of homonyms like fourth and forth, foreword and forward, and their and there, I make use of EditTools’ Homonym macro and database and do not include the words in the Never Spell Word macro. Rather than changing fourth to forth, the macro highlights the word in red, which tells me that I need to check that the word is correct in context. The homonym macro is a separate macro and has its own database, one that you create. So if you know that you have problems with where and were but not with their and there, you can put the former in your database and omit the latter.)
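The logic behind the two databases can be sketched in a few lines of Python (the word lists, highlight labels, and matching rule here are illustrative stand-ins, not EditTools’ actual data or interface):

```python
# Spellings confirmed correct for this project (the Never Spell Word idea)
correct_names = {"Gwynthum"}
# Homonyms the editor wants flagged for a context check
homonyms = {"fourth", "forth", "foreword", "forward", "their", "there"}

def review(words):
    """Return a highlight label for each word that needs attention."""
    flags = {}
    for word in words:
        if word in correct_names:
            flags[word] = "green"   # confirmed correct spelling
        elif word.lower() in homonyms:
            flags[word] = "red"     # verify the word fits the context
        elif any(word != name and word[:3] == name[:3] for name in correct_names):
            flags[word] = "query"   # near-miss of a known name, e.g., Gwythum
    return flags

print(review(["Gwynthum", "Gwythum", "fourth", "castle"]))
# {'Gwynthum': 'green', 'Gwythum': 'query', 'fourth': 'red'}
```

The point of the sketch is the division of labor: confirmed spellings are marked as safe, homonyms are marked for human judgment, and near-misses stand out by not carrying the safe mark.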

As noted earlier, the same tools that benefit editors can benefit authors who are preparing their manuscripts for submission to an editor, or even thinking about self-editing their manuscript. Thinking a little outside one’s comfort zone and making the best use of editing and writing tools can improve a manuscript tremendously, and for authors, can help reduce the cost of professional editing.

In later articles in this series, I will go into detail about how to use some of the macros that make up the EditTools collection. However, it must be remembered that macros are mechanical, unthinking tools. No editor or writer should think of macros as a substitute for using independent judgement; rather, macros should be looked on as being an aid to creating a more perfect manuscript.

Richard Adin, An American Editor

September 17, 2012

On Language: The Fallacy of Not Splitting the Infinitive

Filed under: On Language — Rich Adin @ 4:00 am

Rules of grammar are good and important. They are good because they act as a guide; they are important because they provide us a way to communicate with each other so that we understand what each participant to a conversation is trying to convey.

The flipside is that grammar rules are also bad. They allow a “noted and respected” language commentator to “definitively” determine (or should it be “to determine ‘definitively’”?) what is and is not acceptable grammar. To some authors, editors, and grammarians, the rules are rigid and unyielding. Cite the rule and follow it — or else! Alas, the rules are really like clothing — fashionable today, unfashionable tomorrow.

This is such a tale — a tale of the bad side of the “rules” of grammar; the dark side, if you will. This is the tale of splitting the infinitive!

We all know modern English’s most famous split infinitive: “To boldly go where no man has gone before.” Granted, that is not the entire sentence or paragraph (which was: “Space: the final frontier. These are the voyages of the starship Enterprise. Its five-year mission: to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no man has gone before.”) as spoken by William Shatner at the beginning of virtually every episode of the original Star Trek television series, yet it is the phrase to which most of us turn when we want to justify splitting infinitives.

We need not rely on that quote to justify splitting infinitives. In fact, it is those who oppose ever splitting infinitives, or who believe it can be done in exceptional circumstances only, such as in the above quote, who have the tougher road to travel.

English has been a language of split infinitives since at least the early 1300s. For hundreds of years, no complaint was heard, not a grammarian rose in opposition, until the mid to late 1800s. Suddenly, English needed to be raised from its common roots to the heights of perceived linguistic nobility. After all, England and English were conquering the world — the sun never set on the British Empire — and what good was it to be a conqueror if one’s language was barbaric? Okay, I admit that I really don’t know that this is the reason, but it is as good a tale as any, because otherwise there really is little reason for the sudden change in what is and is not kosher grammar.

The change did come about, however, as grammarians began to identify English with what they considered the epitome of language: Latin. Latin was a “pure” language, especially compared to English. If there ever was a born bastard, its name is English. Unlike Latin, which was reluctant to adopt and incorporate other languages, English had no pretensions of nobility or pure blood; English was (and is) a working language that will adopt and incorporate words from anywhere. It is malleable. Unlike Latin, which was stiff and which is now dead, English is flexible and living. Like the way something is said in Russian? No problem, English will make it its own. We use vodka, for example, as if it had always been an English word. English is an aberration; just ask the French Academy of Language (L’Académie française), which strives to preserve a “pure” French language.

The prohibition against splitting infinitives came about in the late 1800s as grammarians increasingly tried to equate English with one of its many forebears — Latin. Grammarians tried to apply Latin’s rules of construction to English, causing consternation for generations of school children. The application of Latin construction rules to a language as unstructured as English was (and is) problematic at best, impossible at worst. But fashion is fashion and if one wants to be king of one’s niche in the world, one must be fashionable.

Consequently, once the rule against splitting infinitives gained some traction, many of the leading grammarians jumped on the bandwagon. One’s reputation as a grammarian was at stake and fashion leads by the nose.

Unfortunately for the grammarians, the mass of English speakers and writers are resilient and reluctant to give up what sounds good — and what conveys the proper message — and so although we often cite the rule against splitting infinitives, we give it the honor it deserves by ignoring it. As Bryan Garner writes (Garner’s Modern American Usage, 2009, p. 767): “Although few armchair grammarians seem to know it, some split infinitives are regarded as perfectly proper.” I would go a step further and say that split infinitives are proper unless they might cause a miscommunication.

This is an important symbolic issue. Too many editors and writers are adamant that one never splits an infinitive. (There is also the problem of recognizing when an infinitive is being split, but that is a tale for another day.) These editors and writers are correct — if they are writing in Latin. If they are writing in English, infinitives are more often split than not, and correctly so. English does not adopt the grammar rules of French or German simply because it has incorporated words from those languages in its lexicon. Similarly, it is not — and should not be — hamstrung by the construction rules of a dead language, especially in light of how many non-Latin-origin words make up the living, flexible potpourri language we call English.

A good author and a good, professional editor will be guided by the fundamental question of grammar: Does the construction facilitate understanding or misunderstanding? Professional editors need to think of split infinitives in much the same light as they should think about commas: to use Lynne Truss’ example, is it “eats, shoots, and leaves,” or “eats shoots and leaves,” or “eats, shoots and leaves,” or “eats shoots, and leaves”?

We need to boldly go where English has been before and accept split infinitives as the norm.

September 12, 2012

Bye Bye $9.99 and Price Competition in eBooks

The mantra for many ebookers over the past year or so was “get rid of agency pricing and bring back lower ebook prices based on competition.” These ebookers are ecstatic over the approval of the settlement terms in the Department of Justice’s lawsuit against five of the Agency 6 publishers and Apple by Judge Denise Cote on September 6, 2012.

I think it is way too early to celebrate and I think ebook prices of bestsellers will rise, not become lower.

To set the mood to say goodbye to $9.99, here is a song from the past — Don McLean singing “American Pie”:

Now that you’ve been entertained, let’s discuss why I think we can say goodbye to the $9.99 bestseller and to real price competition among the big publishing houses, which control the majority of popular publishing today.

The first problem lies within the settlement agreement itself. As Judge Cote wrote (p. 10 of the Opinion & Order filed September 6, 2012), the publishers, although they cannot use agency pricing, which presumably means a return to the wholesale pricing of the preagency days, can “enter into contracts that prevent the retailer from selling a Settling Defendant’s e-books at a cumulative loss over the course of one year.” This is a threefold problem for consumers.

First, it means that publishers will be able to require Amazon (and/or Barnes & Noble and/or Apple and/or all other ebooksellers) to disclose both sales numbers and pricing, something that Amazon has been loath to disclose even to its shareholders. Under the current system of no such requirement, a publisher knows how many of a title have been sold by Amazon because Amazon has to pay for each title sold. But what has not been known, and what every analyst wants to know, is whether the sales are profitable, not just how many units are sold. Analysts want to know whether Amazon has sold 1 million ebooks and made or lost $5 million from the ebook sales alone. And knowing that information, analysts can determine whether or not Kindle hardware sales are profitable — all information that Amazon has steadfastly refused to isolate.

This is problematic because if Amazon has to verify that over the entire line of, say, Macmillan ebooks it is making a profit — and note that it is over the entire Macmillan line, not over the combined lines of Macmillan and Simon & Schuster — Amazon will have to be very cautious about pricing. One cannot easily take a loss on a million-selling ebook in hopes that over the course of the next months it will sell enough ebooks from that publisher to end the year in profit. How likely is it that Amazon will take that gamble and reinstitute $9.99 pricing?

The second reason this is problematic for consumers is because the order essentially mandates a return to the wholesale pricing scheme but sets no boundaries on that scheme. There is nothing to prevent the publishers from altering the discount rate or even giving a different discount rate to different ebooksellers. As part of its order, the court did away with the most-favored-nation clause, which said whatever terms you give X you must give me.

I know the response to this is that the publishers need Amazon more than Amazon needs the publishers. I think, however, that Amazon’s caving in to Macmillan when Macmillan demanded agency pricing demonstrates that it is the publishers who are in the catbird’s seat, not Amazon. Amazon is the seller of product and thus needs product to survive. Each of the Big 6 publishers controls a significant portion of the necessary product that Amazon cannot afford to do without. Besides, I expect that each of the publishers will come, independently, to the position of squeezing Amazon similarly, so Amazon will have little recourse, just as it had little recourse in the Macmillan dispute.

The third problem for consumers is that the answer to the publishers’ worries that brought about agency pricing is simply to raise the list price of newly published books. The way publishers do this is to take an expected blockbuster and raise its price to the new price point and watch sales. If expected sales (or close thereto) occur, then the next expected blockbuster is given the same price point, and this is repeated until there is confidence that consumers are now expecting to pay the price point.

And this is already beginning. J.K. Rowling’s new book, The Casual Vacancy, published by Little, Brown, has a new price point for a novel: $35. If you check Amazon and Barnes & Noble, you will find that the ebook price at both is $17.99 — a far cry from the previous bestseller price point of $9.99 at Amazon. And the $17.99 is a 49% discount off the list price, which means that the ebook is likely to be generating a 1% gross profit for the retailers, just barely meeting the condition in the settlement order. I see this as an indication that the ebooksellers are concerned about profitability over the entire Little, Brown ebook line over the coming year.

Under the agency system, it would have been expected that Rowling’s new ebook would carry a price no higher than $14.99.

There is also the question of whether Amazon has gotten used to actually making money on ebooks and is using the profit to subsidize the Kindle hardware. Nate Hoffelder raised this question in Did the Agency Model Lead to Cheap eReaders? at his The Digital Reader blog. Having made money on ebooks over the past year, how likely is it that Amazon will want to subsidize both the hardware and the content, perhaps taking a loss on both? At some point, Amazon has to show a profit to prevent shareholder rebellion. And now it has the perfect excuse to do so: Judge Cote’s approval of the settlement agreement that allows publishers to require Amazon to earn a profit on ebooks.

It is the combination of forces unleashed by the approved settlement agreement that will result in no agency pricing for at least two years and, instead, higher prices for consumers and the end of the $9.99 bestseller price. We may occasionally see a bestseller offered at the $9.99 price, but it will be the occasional bestseller, not all bestsellers as in the past. And if we watch prices, I think we will see list prices climb; it will be the rare bestseller that has a list price below $30. Rowling is leading the way, and if her book is a bestseller at $35, it won't take long for other top-tier writers to insist on equal list pricing. That is how it happened in the past and how it will happen this time.

I may be wrong, but I doubt it. History does tend to repeat itself and the DOJ and Judge Cote have let loose a rising tide. Do you agree?

September 10, 2012

Are Free eBooks Killing the Market?

Every day I find another traditional publisher is offering free ebooks. Amazon has made a business out of offering free ebooks. And let’s not forget the many indie authors who are offering their ebooks for free.

What is this doing to the market for ebooks?

I admit that I may be atypical in my buying and reading habits, but I do not think so. I have watched my to-be-read (TBR) pile grow dramatically in the past couple of months from fewer than 300 ebooks to more than 1,100 ebooks. If I obtained no more ebooks until I had read everything in my TBR pile, at my current average rate of two to three ebooks per week, I would have enough reading material for between 367 and 550 weeks, or roughly 7 to 10.5 years.
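The backlog arithmetic is easy to verify; here is a quick sketch (the only assumption is the usual 52-weeks-per-year conversion):

```python
# Check of the TBR backlog arithmetic above.
tbr = 1100                    # ebooks in the to-be-read pile
fast_rate, slow_rate = 3, 2   # ebooks read per week

weeks_min = tbr / fast_rate   # fastest pace -> shortest backlog
weeks_max = tbr / slow_rate   # slowest pace -> longest backlog

print(f"{weeks_min:.0f} to {weeks_max:.0f} weeks")        # 367 to 550 weeks
print(f"{weeks_min / 52:.1f} to {weeks_max / 52:.1f} years")  # 7.1 to 10.6 years
```

At two to three books a week, a 1,100-book pile really is the better part of a decade of reading.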

How has this impacted my buying of ebooks? Greatly! In past years, I bought ebooks regularly. Granted, I was buying mainly indie and low-priced, on-sale traditionally published ebooks, rarely spending more than $6 for an ebook, but I was spending money.

That has all changed. Now I rarely spend any money on an ebook. In the past three months, the only ebook I paid for was Emma Jameson’s Blue Murder, which is her sequel to Ice Blue (which I reviewed in On Books: Ice Blue), at $4.99. Otherwise, all I have done is download free ebooks.

I understand the reason for giving ebooks away for free. How else are authors to attract new readers? This is particularly true when one considers how many ebooks are published each year in the United States alone — more than one million. Somehow one has to stand out from the crowd. But with the ever-increasing number of free ebooks, giving away ebooks is becoming less of a way to stand out.

The problem is that too often all of the ebooks in a series (or at least many of the ebooks in a series) or older, standalone titles by an author are given away. All an ebooker need do is wait. Giving away the first book in a series makes a lot of sense to me. If I like the first book, I’ll buy the subsequent books. But when I see that if I have patience I’ll be able to get the subsequent books free, too, then I don’t rush to buy.

The giving away of free ebooks has brought about another problem: the decline of the must-read author list. I've noted before that my must-read author list has significantly changed over the past few years. In past years, I had a list of more than 20 authors whose books I bought in hardcover as soon as published; today that list is effectively two authors. My must-read ebook author list has grown, but that is a list of indie authors, not traditionally published authors.

Again, the problem is free ebooks. As a consumer, I like free. However, free has so radically altered my book-buying habits — and I suspect the book-buying habits of many readers — that I find it difficult to see a rosy future for publishers, whether traditional or self-publishers. It is because of this that I wonder what lies behind the thinking of publishers who give their ebooks away, especially those who do so in one of Amazon’s programs.

Publishers who participate in Amazon giveaways double hex themselves. First, they undermine their own argument that ebooks are valuable. Second, they antagonize ebookers like me who do not own Kindles or are not Amazon Prime members and thus are unable to get those ebooks for free. I have seen so many ebooks available for free on Amazon that are not available to me for free as a Nook, Sony, or Kobo owner that I have simply resolved, with some limited exceptions, not to buy ebooks. Either I'll get them for free or not at all.

The Amazon giveaways also tempt me to join the "dark side," that is, if there is a book in which I am interested, to search for it on pirate sites. By giving away an ebook on Amazon but not making it free at all ebookstores, the publishers are enticing people to pirate. When publishers degrade the value of ebooks, their message is received by all readers and is acted on by many.

This is a no-win situation for everyone. Ultimately, even readers lose because the incentive to write disappears when there is little to no hope of earning any money for the effort. And even if authors continue to write, the quality of the writing will suffer because no one will see the sense in investing their own money in a product they are going to give away.

It is still early in the ebook revolution, so no one really knows what eBook World will look like in a decade or two. But it is pretty clear to me that freebie programs like Amazon's are detrimental to the overall health of the book market. Authors and publishers should rethink the giving away of their ebooks, other than, perhaps, the first book in a series, before they set in concrete the reader expectation that "if I just wait, I'll get it for free, so why pay for it now?" If nothing else, the giving away of ebooks is helping to depress the pricing of ebooks and perhaps driving some ebookers to the pirate sites. My own experience as a buyer of ebooks demonstrates this.

I know that ebooksellers like Amazon are reporting rising ebook sales, but the data I want to see are sales numbers that exclude the one-shot blockbusters, along with the price levels. The current problem with sales data is that we are seeing only the macro information, so we do not know the real effect free ebooks are having on the market. We are also still in the era of growth in the number of ebookers. When that growth stops, we may get a clearer picture. In the meantime, I know that my spending on ebooks has declined from thousands of dollars to tens of dollars and is getting close to zero. I'm sure I'm not the only one who has experienced this decline in spending.

September 7, 2012

A Musical Interlude: Rita Hayworth is Stayin’ Alive

Filed under: A Musical Interlude — Rich Adin @ 11:22 am

Sometimes I come across a video that is worth sharing. The video below, Rita Hayworth is Stayin' Alive, shows how dance moves remain relevant over the years. By the time the Bee Gees gave us Stayin' Alive (a hit song from the movie Saturday Night Fever, starring John Travolta) in 1977, most of the actors in the video clips were deceased. But to watch the clips against the musical background of the Bee Gees is not only uplifting but also shows that music and dance cross the generations. So, sit back and enjoy Rita Hayworth is Stayin' Alive with the Bee Gees (and keep an eye out for Frank Sinatra, Gene Kelly, Van Johnson, and, of course, Fred Astaire, among other notable actors who danced with Rita Hayworth) —

September 5, 2012

The Business of Editing: Whom or Who?

Sometimes language usage can be very difficult. This is especially true when we rely on our ears. If a construction doesn't sound right when spoken, we often assume it cannot be right when written. This is the problem of whom and who: whom often sounds incorrect when it is correct.

Because the growth and modernization of language rarely follows the written-oral (aural) trajectory and nearly always follows the oral (aural)-written trajectory, word usage comes and goes based on the latter. This has been recognized for years, as evidenced by the 1870 pronouncement of Grant White, a 19th-century grammarian, predicting the death of whom. In his great treatise The American Language (1936), H.L. Mencken wrote that "Whom is fast vanishing from Standard American." The predictions of death have been ongoing, yet whom remains a part of the lexicon.

One problem with whom is that it sounds stilted. A more fundamental problem is that so many people do not understand when to use whom and when to use who.

Who is the subject of a verb ("It was Jon who put out the fire") and the complement of a linking verb ("They know who started the fire"), whereas whom is the object of a verb ("Whom did you see?") or of a preposition ("She is the person with whom we need to speak").

Yet when I look at the foregoing description, I do not find that my understanding of when to use who and when to use whom is any easier to implement. The subject versus object distinction is helpful but not always clear.

Perhaps a better method for determining which is correct when is what I call the substitution principle, which is found in The Gregg Reference Manual (10th ed., 2005) by William Sabin. According to Gregg (¶1061 c and d),

Use who whenever he, she, they, I, or we could be substituted in the who clause.

Use whom whenever him, her, them, me, or us could be substituted as the object of the verb or as the object of a preposition in the whom clause.

The substitution principle makes the choice easier. Using examples from Gregg, here is how it works, beginning with who:

Who booked our conference?
Who shall I say is calling?
Who did they say was chosen?

The substitutes for the foregoing who examples are, respectively:

He booked our conference.
I shall say he is calling.
They did say she was chosen.

Now let’s look at whom:

Whom did you see today?
To whom were you talking?
Whom did you say you wanted to see?

The substitutes for the foregoing whom examples are, respectively:

You did see her today.
You were talking to him.
You did say you wanted to see her.

The substitution principle seems to work fairly well. Yet it does not avoid the problem of whom sounding stilted and wrong. And because it sounds stilted and wrong, it is likely that it will not be properly used. Perhaps it shouldn’t be used at all.

Perhaps we should rewrite sentences that demand a whom or at least those that make us wonder if it should be whom rather than who. How many of us react positively to “Whom are you going to endorse in the next election?” Whether read or spoken, it comes across as stilted. We are more likely to write and say, “Who is your candidate in the next election?” or even “Who are you going to endorse in the next election?” because it reads and sounds more natural to our aural sense.

The question neither asked nor addressed so far is this: Does it matter if we use who in place of whom? My thinking is that it does not matter because we will write and speak the who sentence in a form that aurally sounds natural and correct, and thus no one will question the use of who and wonder if it should have been whom. Having said that, the reality is that, as with most things in the English language, everything depends on context.

The failure to get who and whom correct in fiction is of less concern than in nonfiction. Yet even in nonfiction, unlike other words, the misuse of who and whom is rarely, if ever, misleading or a cause of miscommunication. To my way of thinking, that is the key to language usage: If there is no miscommunication, then the ultimate goal has been met; if there is a chance of miscommunication, then correction is necessary.

For most of us, it will be the rare sentence that will cause us to pause and remark, "I wonder if this should have been whom and not who." Should such a case arise (i.e., one where we pause and wonder), we should rewrite the sentence or apply the substitution test and make the correction. In the absence of that "aha" moment, I think we should simply let the matter go because it is not causing any miscommunication or stumbling.

Yes, there is a grammatically correct usage for who and whom; but the purpose of grammar is to ensure understanding. Automatic application of grammar rules for the sake of applying them does not further grammar’s goal in this case. Flexibility has been the cornerstone of English grammar over the course of time.

What do you think?

September 3, 2012

Choosing Words — Carefully

The advantage writing has over speech is that writing gives the author time to rethink what he or she has written. With speech, there is just that fleeting moment before the words form to think about what is about to pass the lips.

A recent gaffe by Todd Akin, the Republican candidate for the U.S. Senate in Missouri, was a stark reminder of the importance of word choice. Although I will repeat his gaffe in a moment, I do not want to discuss the rightness or wrongness of what he said; rather, I want to focus on choosing words carefully and why it is important for authors to think carefully about their writing, something which too few seem to do.

Todd Akin was questioned about his views on abortion, a very hot topic in American politics, and he said: “If it is a legitimate rape, the female body has ways to try to shut that whole thing down” so the woman cannot become pregnant (emphasis supplied). Politicians seem to be adept at providing editorial fodder.

This is a classic example of the importance of word choice and of applying the test of correctness. The test is the anti clause, that is, to ask: What is an illegitimate rape? This faux pas by Akin also demonstrates why it is important to consider the appropriateness of a particular word choice. And I'm not referring to the political consequences; instead, I mean the communication-miscommunication conundrum.

Many of us have read at least one of the great Sherlock Holmes mysteries as written by Sir Arthur Conan Doyle. Conan Doyle was one of the great masters of language. Virtually every word was chosen with extreme care because each word could direct one to a clue or misdirect one away from a clue. Sherlock Holmes was a master detective who could see what everyone else missed, but Conan Doyle had to convey what Holmes saw in a manner that would allow the reader to solve the mystery along with Holmes or be confounded and then praise Holmes’ superior acuity when he lays out for the reader all the “obvious” clues. The point is that Conan Doyle had to consider a word and what I call its anti version (i.e., the antiword) to be sure that the word conveyed only the meaning (or obfuscation) that Conan Doyle intended.

I suspect that for Conan Doyle the word-antiword conflict resolution came quickly and easily. Poets, too, seem to have an innate grasp of this concept as they try to convey much by little. But for many of us, it requires some effort. It is clear when reading many novels that for many authors a conscious effort is needed to resolve the conflict. It is clear because so many never seem to come to grips with the problem, and even fewer seem to resolve it.

(In essence, antiword is a substitute for opposite [as in legitimate vs. illegitimate] but neither opposite nor antonym is, I think, a broad enough term or concept for this problem. I think, perhaps wrongly, that anti, which does imply opposite and antonym but also implies other characteristics, is a better descriptor. Thus my use of antiword.)

Consider whether something is legitimate or illegitimate, as in the Akin quote. In the quote, the question is less whether something is legitimate than whether it is illegitimate. It is the antiword that throws into question the accuracy of the word chosen. For legitimate to be correct, illegitimate must also be correct. Yet illegitimate in the context of the quote is incorrect.

Which brings us to the next step in the analysis: Why is the antiword impossible? Or illogical? Or implausible? Or simply incorrect? In the case of the Akin quote, it is because by definition rape is always illegitimate and therefore the antiword to illegitimate — legitimate — must be incorrect in the sense that there can be no such thing as legitimate rape. (Understand that it is the use of legitimate with rape that presents the problem. Akin could have said “uncoerced sex,” in which case, the antiword coerced is as accurate as its antiword uncoerced and renders a different meaning to the quote.)

I know the argument appears to be circular, but it really isn’t. What it boils down to is that both the word and the antiword must be capable of being correct in the exact same sentence. The Akin quote would more accurately reflect his “claimed” views had he used coerced sex rather than legitimate rape. More importantly, there would have been no miscommunication (which, I know, assumes that there was miscommunication in his original statement).

This is the dilemma that a good writer faces: How does one choose to describe something so as to lead the reader to the conclusion that the author wants? The good writer creates believability when both the word and the antiword can be correct, because the message sent, albeit stealthily, is that “I considered the antiword, but it fails to bring you to where I want you to go, even though it, too, is possible.”

The best storytellers are those who weigh the word and the antiword, even if they do so subconsciously. In fact, I suspect that the better a writer is, the more this process takes place subconsciously. But it does take place, which is what matters. That it takes place is what separates the craftsperson-writer from the amateur writer.

The value of the word-antiword process is that it enhances the likelihood that the correct word is chosen and that communication, rather than miscommunication, between author and reader occurs. Anyone can sit at a computer today, pound out a 100,000-word novel, and self-publish it. Very few people can rise to the level of a craft-author, that is, one whose words convey clear, precise meanings and messages. It seems to me that we can see this difference in many forms of writing, including less formal writing such as blogs.

The greater the care that is taken with word choice, the more accurate the communication and the better the writing — a goal toward which every author should strive.
