An American Editor

March 27, 2017

The Business of Editing: The AAE Copyediting Roadmap VI

So far I have created a stylesheet and cleaned the document (see The Business of Editing: The AAE Copyediting Roadmap II), tagged the manuscript by typecoding or applying styles (see The Business of Editing: The AAE Copyediting Roadmap III), inserted bookmarks for callouts and other things I noticed while tagging the manuscript (see The Business of Editing: The AAE Copyediting Roadmap IV), and created the project- or client-specific Never Spell Word dataset and run the Never Spell Word macro (see The Business of Editing: The AAE Copyediting Roadmap V). Now it’s time to tackle the reference list.

Fixing Reference Callouts

Before I get into the reference list itself, I need to mention another macro that I run often but not on all files — Superscript Me. Nearly all of the manuscripts I work on want numbered reference callouts superscripted and without parentheses or brackets. The projects usually adhere to AMA style. Unfortunately, authors are not always cooperative and provide reference callouts in a variety of ways, including inline in parentheses or brackets, superscripted in parentheses or brackets, with spacing between the numbers, and on the wrong side of punctuation. Superscript Me, shown below, fixes many of these problems. (You can make an image in this essay larger by clicking on the image.)

Superscript Me

I select the fixes I need and run the macro. Within seconds the macro is done. One note of caution: It is important to remember that macros are dumb — they do as instructed and do not exercise any judgment. Consequently, even though Superscript Me fixes many problems, it can also create problems. My experience over the decade that I have been using this particular macro has been that the fixes are worth the errors the macro introduces, even though those errors require manual correction during editing. The introduced errors are few, whereas the fixes often number in the hundreds.

Tip: Superscript Me is a powerful, timesaving (and therefore profitmaking) macro, but as noted above, it is dumb, and just as it can do good, it can do harm — especially to reference lists. Before using Superscript Me on the manuscript, move the reference list to its own file. Doing so ensures that Superscript Me changes only the main text, not the reference list, saving a lot of undo work.
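To make the cleanup concrete, here is a rough Python sketch (my own approximation, not Superscript Me's actual code) of the kinds of fixes described above: stripping parentheses and brackets from numeric callouts, closing up spacing, and moving callouts to the correct side of punctuation. Because plain text cannot show superscript, the caret pair ^...^ stands in for superscripted text.

```python
import re

def tighten(nums: str) -> str:
    """Close up spacing inside a number run: '3, 5 - 7' -> '3,5-7'."""
    return re.sub(r"\s*([,-])\s*", r"\1", nums.strip())

def fix_callouts(text: str) -> str:
    """Strip parentheses/brackets from numeric reference callouts and
    move them to the correct side of punctuation; ^...^ stands in for
    superscript, which plain text cannot show."""
    # callout sitting before punctuation: "word [3, 5]." -> "word.^3,5^"
    text = re.sub(
        r"\s*[\[(]\s*([\d,\s-]+?)\s*[\])]([.,;])",
        lambda m: m.group(2) + "^" + tighten(m.group(1)) + "^",
        text,
    )
    # remaining bracketed or parenthesized callouts: "word (12)" -> "word^12^"
    text = re.sub(
        r"\s*[\[(]\s*([\d,\s-]+?)\s*[\])]",
        lambda m: "^" + tighten(m.group(1)) + "^",
        text,
    )
    return text
```

Like the macro itself, this sketch is dumb: it will happily treat a parenthetical year such as (1986) as a callout, which is exactly the kind of introduced error that must be caught during editing.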

Wildcarding the Reference List

By this point, the reference list has been generally cleaned and moved to its own file.

Tip & Caution: Wildcard macros can be a gift from heaven or a disaster from hell. I like to do what I can to ensure they are a gift and not a disaster. Consequently, I move the reference list to its own file. I know I have said this before, but wildcarding is another reason for separating the reference list from the manuscript file. Often what I want changed in a reference list, I do not want changed in the primary text; similarly, what I want changed in the primary text, I do not (usually) want changed in the reference list. But like all other macros, wildcards are dumb and cannot tell main text from reference list. It does no harm to move the reference list to its own file and work on it separately from the main text, so be cautious and move it.

Individual problems, however, have not been addressed. I scan the list to see what the problems are and whether the problems are few or many. For example, if author names are supposed to be

Smith AB, Jones EZ

but are generally punctuated like

Smith A.B., Jones E. Z.

or in some other way not conforming to the correct style, I will use wildcard macros and scripts to correct as many of these “errors” as I can. Wildcards can address all types of reference format errors, not just author-name errors. For example, a common problem that I encounter is for the cite information to be provided in this format:

18: 22-30, 1986.

when it needs to be

1986 Feb 22; 18: 22-30.

These formatting errors are fixable with wildcards and scripts.
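For illustration, here is a Python regex sketch (standard regex syntax, not WFR's wildcard syntax) of two typical author-name fixes of the kind described above, such as changing Smith, A.B., to Smith AB,. Running the substitutions in sequence is, in miniature, what a script does:

```python
import re

def fix_authors(ref: str) -> str:
    """Sequentially apply two author-name wildcard-style fixes."""
    # "Smith, A.B.," or "Smith, A. B.," -> "Smith AB,"
    ref = re.sub(r"([A-Z][a-z]+), ([A-Z])\. ?([A-Z])\.,", r"\1 \2\3,", ref)
    # "Smith, C.," -> "Smith C,"
    ref = re.sub(r"([A-Z][a-z]+), ([A-Z])\.,", r"\1 \2,", ref)
    return ref
```

Each substitution fixes every matching name in the reference list in one pass, which is where the hundreds of corrections come from.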

Scripts are like supermacros. A script is a collection of many individual wildcard macros that have been combined into one macro — the script — and run sequentially. One of my reference scripts is shown here:

Wildcard Find & Replace Script

In the image, the active script file (#1) is identified and what it does (broadly) is described in the description field (#2). The wildcard macros that are included in the script and the order in which they will run are shown in the bottom field (#3). Included is a description of what each of the included wildcard macros will do (#4). For example, the first wildcard macro that the script will run will change Smith, C., to Smith C, and the second wildcard macro to run will change Smith, A.B., to Smith AB,.

The wildcard macros were created using the Wildcard Find and Replace (WFR) macro shown below. In the image, the example wildcard macro (arrow) is the same as the second macro in the script above, that is, it changes Smith, A.B., to Smith AB,.

Wildcard Find & Replace

Creating the macros using WFR is easy as the macro inserts the commands in correct form for you (for more information, see the online description of WFR). Saving the individual wildcard macros, assembling them into scripts, and saving the scripts, as well as running individual wildcard macros or scripts, is easy with WFR. (For some in-depth discussion of wildcards, see these essays: The Business of Editing: Wildcarding for Dollars; The Only Thing We Have to Fear: Wildcard Macros; and The Business of Editing: Wildcard Macros and Money.)

With some projects I get lucky: the authors have only a few references that are a formatting mess. When there are only a messy few, I fix them manually rather than run the macros.

Fixing Page Ranges

If the references are in pretty decent shape (formatwise) so that I do not need to run WFR, I will run the Page Number Format macro (shown below) to put the page range numbering in the correct format. For example, depending on the format selected, the macro will automatically change a range of 622-6 to 622–626, 622–6, or 622.

Page Number Format
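The underlying transformation is simple enough to sketch in a few lines of Python (an illustration of the arithmetic, not the macro's code); here the abbreviated second number is expanded and the hyphen becomes an en dash:

```python
def expand_range(rng: str) -> str:
    """Expand an abbreviated page range such as '622-6' or '622-26'
    to the full form with an en dash: '622–626'."""
    first, last = rng.replace("\u2013", "-").split("-")
    # restore the leading digits dropped from the second number
    last = first[: len(first) - len(last)] + last
    return f"{first}\u2013{last}"
```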

Making Incorrect Journal Names Correct

At long last it is time to run the Journals macro. As my journals datasets have grown, they have made reference editing increasingly more efficient. It takes time to build the datasets, but the Journals Manager (shown below) lets me build multiple datasets simultaneously.

The Journals Manager

As shown in the image, I can build five datasets (arrows) simultaneously. My primary dataset — AMA with Period — has 212,817 journal entries (see circled items).

Tip: Move the reference list to its own file to shorten the time it takes to run the Journals macro. The larger your journals dataset, the more time the Journals macro requires to complete a run. Each iteration of the Journals macro searches from the top of the document to the end as it looks for matches. Leaving the reference list in the manuscript means the macro has that much more to search. In a recent timing test of the Journals macro using my primary dataset and a 50-page document with 110 references, without separating out the list, the macro was still running after 2 hours and was not near completion. Running the Journals macro with the same dataset and on the same reference list — but with the list in its own file — took less than 10 minutes. (Think about how long it would take you to manually verify and correct 110 journal names.)

The Journals macro searches through the reference list for journal names and compares what is in the reference list against what is in the chosen dataset. If the name in the reference list is correct, the macro highlights it in green (#5), as shown below; if it is incorrect, the macro corrects it and highlights the change in cyan (#6). All changes are done with Tracking on.

The Reference List After Running Journals Macro

The Journals macro does two things for me: First, if the incorrect variation of the journal name is in the dataset, it corrects the incorrect journal name so that I do not have to look it up and fix it myself (see #6 above). If the incorrect variation is not in the dataset, the macro makes no change. For example, if the author has written New Engand J. Med but that variation is not in the dataset, it will be left, not corrected to N Engl J Med. When I go through the reference list, I will add the variation to the dataset so it is corrected next time. Second, if the journal name is in the dataset, it highlights correct names, which means that I know at a glance that the journal name is correct and I do not have to look it up (see #5 above).
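The two behaviors can be modeled as a simple lookup table (the names and structure here are mine, not EditTools'):

```python
# Every known spelling, the correct form and author variations alike,
# maps to the correct journal name.
DATASET = {
    "N Engl J Med": "N Engl J Med",        # correct form
    "New England J. Med": "N Engl J Med",  # known author variation
    "NEJM": "N Engl J Med",                # known author variation
}

def check_journal(name: str) -> tuple[str, str]:
    """Return (name as it should appear, status): 'green' means already
    correct, 'cyan' means corrected from a known variation, 'unknown'
    means not in the dataset and must be looked up by hand."""
    if name not in DATASET:
        return name, "unknown"
    correct = DATASET[name]
    return correct, "green" if name == correct else "cyan"
```

The value of the third, "unknown" state is that it flags exactly the names that still need a trip to the National Library of Medicine catalog.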

It is true that the names of some of the more frequently cited journals become familiar over time, but there are thousands of journals, and even with the frequently cited ones with which I am familiar, correcting an incorrect name takes time.

It is important to remember that time is money (profit) and that the less time I need to spend looking up journal names, the more profit I make.

After I run the Journals macro, I open the Journals Manager (see above) and I go through the reference list, doing whatever editing is required and fixing what needs fixing that my macros didn’t fix. Because of the current size of my journals datasets, there aren’t usually many journal names that are not highlighted. When I come to one that is not highlighted either green (indicating it is correct) or cyan (indicating it was incorrect but is now correct), I look up the name and abbreviation in the National Library of Medicine online catalog and other online sources. When I locate the information, I add it and the most common author variations (based on my experience editing references for more than 30 years) to the five datasets via the Journals Manager.

I take the time to add the journal and variations because once the variations have been added, I’ll not have to deal with them again. Spend a little time now, save a lot of time in the future.

In addition to editing the references for format and content, I also keep an eye out for those that need to be removed from the reference list and placed in text — the personal communication–type reference — and for those that need to be divided into multiple references. When I come across one, I “mark” it using a comment. For example, using the Insert Query macro (which is discussed in the later essay The Business of Editing: The AAE Copyediting Roadmap X), I insert the comment shown below for unpublished material:

Query for Unpublished Material

When I come to the in-text callout during the manuscript editing, I move the reference text to the manuscript, delete the callout and the reference, and renumber using the Reference # Order Check macro (which is discussed in the later essay The Business of Editing: The AAE Copyediting Roadmap VIII).

Now that the Journals macro has been run and the references edited, the next stop on my road is the search for duplicate references, which is the subject of The Business of Editing: The AAE Copyediting Roadmap VII.

Richard Adin, An American Editor

March 20, 2017

The Business of Editing: The AAE Copyediting Roadmap V

I am now nearly at the point where I actually begin editing the manuscript itself. I’ve created a stylesheet and cleaned the document (see The Business of Editing: The AAE Copyediting Roadmap II), tagged the manuscript by typecoding or applying styles (see The Business of Editing: The AAE Copyediting Roadmap III), and inserted bookmarks for callouts and other things I noticed while tagging the manuscript (see The Business of Editing: The AAE Copyediting Roadmap IV). Now it is time to create the project- or client-specific Never Spell Word dataset and then run the Never Spell macro.

Never Spell Word (NSW) lets me create project- or client-specific datasets. If I know, for example, that the client prefers “distension” to “distention,” I can use NSW to mark every instance of “distension” with green highlighting, which tells me that this is the correct spelling, and to change every instance of “distention” to “distension.” The change is made with Tracking on and highlighted in cyan to visually cue me that a change has been made. (I can choose to make the changes with Tracking off, but that is not something I ever do.)

Tip: It is important to remember that the tab names, such as “Drugs,” in the Never Spell Manager, and in nearly all managers, can be changed to whatever name best suits your editorial business. Use the Change Tab Name button. The tab names that show when you install EditTools are placeholder names.

Highlighting is integral to EditTools. Highlighting attracts the eye and by using different highlight colors, I can, at a glance, tell whether I need to review or check something. Because of the types of books I work on, it is not unusual for Word to put a red squiggle under a word or phrase that is actually correct — it just isn’t in Word’s dictionary. Most editors would stop and check the squiggled word, but, for example, if I see it is highlighted in green, I know that it is correct and I do not have to check it — I know I have already checked the word and then added it to a tab in the Never Spell Manager.

The point is that NSW enables me to mark (via highlighting) items that are correct, items that need to be checked, items that are correct but may not be capitalized correctly, items that should never be spelled out, and items that should always be spelled out according to the stylesheet and client instructions. Some examples are shown in the image below (you can make this image, as well as other images in this essay, larger by clicking on image):

A Dataset in Notepad++

The datasets are text files. The above image shows a project-specific dataset that was opened in Notepad++ (Notepad++ is an outstanding free text editor that is a replacement for Microsoft’s Notepad). The * and $ preceding an entry indicate case sensitive and whole word only, respectively. For example (#1 in image above),

*$ms | cyan -> msec

means: find instances of “ms” as a lowercase whole word (in other words, “ms” but not, e.g., “forms” or “MS”) and change it to “msec.” What I will see in the manuscript is this:

Change Example

The cyan tells me at a glance that this has been changed by NSW. If the change is incorrect for some reason, I can reject the change, which is why I do it with tracking on.
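Here is a minimal sketch of how such an entry might be applied, based on my reading of the dataset line format shown above (the real NSW file may carry more fields, and this is not EditTools' code):

```python
import re

def apply_nsw_entry(entry: str, text: str) -> str:
    """Apply one dataset line of the form '*$ms | cyan -> msec'.
    '*' = case sensitive, '$' = whole word only."""
    left, repl = entry.split("->")
    find = left.split("|")[0].strip()
    case_sensitive = find.startswith("*")
    whole_word = "$" in find[:2]
    find = find.lstrip("*$")
    pattern = re.escape(find)
    if whole_word:
        # word boundaries keep 'ms' from matching inside 'forms'
        pattern = r"\b" + pattern + r"\b"
    flags = 0 if case_sensitive else re.IGNORECASE
    return re.sub(pattern, repl.strip(), text, flags=flags)
```

With the sample entry, "a 20 ms delay in MS forms" becomes "a 20 msec delay in MS forms": the whole-word flag protects "forms" and the case-sensitive flag protects "MS."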

I use NSW as a way to implement stylesheet decisions, as well as client preferences. An example is “F/M” (#2 in above image). The nice thing is that I do not need to format the entry. The Never Spell Manager, shown below, makes it easy — I just fill in the blanks, check one or both checkboxes if appropriate, and click Add. I can easily correct an erroneous entry by double-clicking on it, correcting it, and clicking Update. And the Manager stays open and available until I click Close. With this Manager, I can make additions to any of the tabs.

The NSW Manager

I also use NSW as a way to mark things I already know are correct, or know are incorrect and need changing, so that I spend less time on spell-checking tasks and more time on higher-level editing. When I come across a new term, such as the name of a new organism, if appropriate I add it to one of the NSW datasets after I verify it so that next time it will be highlighted and, if necessary, corrected. For example, authors often type ASO3 rather than the correct AS03 (the first uses the letter O, the second a zero). Having come across that mistake often, I added the instruction to change ASO3 to AS03 to my Commonly Misspelled Words dataset. Another example is the word towards. The correct spelling in American English is toward, so I added the word towards and the correction (toward) to an NSW dataset.

When I run the NSW macro, I am actually running more than what is contained in the Never Spell Words dataset — I can choose to run one, some, or all of the datasets represented by the tabs in the Never Spell Manager shown here:

Choosing Datasets

In this example, I am running all of the datasets except the Confusables dataset.

Tip: Using only the datasets that are applicable to the project allows the NSW macro to complete faster. This is especially true as your datasets grow.

I run the NSW macro over the main text; I do not run it over the reference list. My habit is to move the reference list to its own document after I style/code and do cleanup, but before I run NSW. The NSW macro requires the placement of a bookmark called “refs” at the point in the manuscript where I want the macro to stop checking text. Consequently, I do not have to move the reference list to a separate file if the list is after the material I want the macro to go over — I can just put the bookmark in the reference list head or in a line that precedes the list. I move the reference list to its own file because my next step will be to run the Journals macro, and that macro works faster and better when the reference list is in its own file, especially if the datasets are large, as mine are (e.g., my AMA style dataset runs more than 212,000 entries).

As I said earlier, I keep the Never Spell Manager (shown above) open while I edit. Doing so lets me add new material to the various datasets as I edit the manuscript. The idea of the multiple tabs is to be able to have specialized datasets that are usable for all (or most) projects; for me, only the Never Spell Words dataset is project/client specific. When I come across the name of a study, for example, such as AFFIRM (Atrial Fibrillation Follow-up Investigation of Rhythm Management), I enter the information in the Studies/Trial tab dataset, because that is information that is neither project nor client specific.

I also keep open the Toggle Managers because when I come across something like the AFFIRM study I want to enter it into the appropriate Toggle dataset, too. But the Toggle macro is the subject of a later Roadmap essay (The Business of Editing: The AAE Copyediting Roadmap VIII).

After running NSW, it is time to turn attention to the reference list. The Journals macro and the Wildcard Find and Replace macro are the subjects of The Business of Editing: The AAE Copyediting Roadmap VI.

Richard Adin, An American Editor

March 15, 2017

The Business of Editing: A Page Is a Page — Or Is It?

When editors speak of how to charge for a project, they are referring to one of these three methods: by the page, by the project, or by the hour. Although editors speak in these terms (page, project, hour), the truth is that every calculation comes down to the page.

Editing is based on the page

Regardless of an editor’s method of calculating a fee, a fundamental question that every one of them must answer — consciously or subconsciously — is this: How many pages can I edit per hour in a project like the one proffered? The number of pages you can edit in an hour sets the basis for your rate and establishes your profitability. (I am guessing that your client does not have an unlimited editing budget; in 33 years of editing I have yet to be told, “My editing budget is unlimited, so take as long as you need.”) Thus the question: What is a page?

Even if you charge by the hour, you are assuming that you can edit a certain number of pages per hour. You might assume, when estimating a project, that you can edit 10 pages an hour and price the project accordingly, but if you soon discover you can only edit two pages an hour, you are likely to find that you are losing your shirt on the project. (It is to “prevent” this scenario from happening that editors claim they charge by the hour. It is the rare client, however, who will pay you for an unlimited number of hours, even if you explain the project’s difficulties. Charging by the hour often does not prevent the described scenario. In my experience, it is smarter to charge by the page or project, and when you are enmeshed in a money-losing project, you should figure out how to make it and subsequent similar projects profitable.)

Pages are the basis not only for the fee but also for the schedule. Editors want to know a “page count,” even if they are charging by the hour, so that they can calculate how much time a project will take — a critical consideration for scheduling purposes. That brings us full circle to the assumption that the editor can edit x pages per hour.

Pages are the foundation of editing.

What is a page?

Defining a page is critical to editor–client communication. Experienced editors know that; less-experienced editors soon learn it. Yet rarely is there a true discussion of what constitutes a page. Defining a page is also critical to an editor’s profitability. The defined page forms the foundation on which many further decisions, such as what to charge, are based.

When people in the editing business ask how a page is defined, they usually ask the question obliquely — What should I charge? How should I calculate what to charge? — instead of directly: What constitutes a page and why? A common response to the charge question is to ask how long the manuscript is, with the expectation that the response will be x words. No one questions whether counting words is the best method for calculating length. (The most common response is simply to toss out a number such as $25/hour, citing some survey as authority — without anyone ever wondering about or questioning the legitimacy of the survey — or $1/page, without any discussion of what constitutes a page or of the legitimacy or basis of the stated per-page price. The most financially successful editors have calculated their required effective hourly rate and have based their pricing on that, not on what colleagues claim is “standard” pricing, or on what some very unscientific survey claims is “standard.” [For information on calculating your required effective hourly rate, see the series Business of Editing: What to Charge. Some other essays on pricing are So, How Much Am I Worth?, The Business of Editing: Why $10 Can’t Make It, and Business of Editing: The Quest for Rate Charts.])

When the discussion does get to the question of calculating a page, the most common response is 250 words = 1 page.

This equivalency has been around for decades, and its continued legitimacy is based on that longevity. The equivalency came about in the days of typewriters. Editing was done on paper, and manuscripts were required (unless, of course, the author ignored the requirements, as authors so often do today) to be single-sided, double-spaced, with one-inch margins. The typewriter font was universally Courier 12-point, a monospaced font that ensured that all spacing between words and characters was uniform. Unlike today’s word-processing programs, authors couldn’t create all the little variations that currently haunt manuscripts and editors (such as adjusting kerning to squeeze more words on a line, or using multiple font sizes to “format” the manuscript). Most manuscripts were straight text as far as the editor was concerned. (There was often a separate editor who dealt with figures and tables.) So, for decades 250 words = 1 page was a good equivalency.

Personal computers and word-processing software began taking over in the 1980s, and things likewise began to change for editors. A job that was once divided among specialists became a unitary job to be done by one editor. Where previously a typesetter would clean up files, that task became the editor’s job. Everything changed except the equivalency. Authors tried “typesetting” their manuscripts so that editors and publishers would “know” what the author wanted the text to look like. And editorial headaches became more frequent and more enduring.

Even as the workload increased and the pay stagnated, the equivalency carried on. It is still a living, breathing thing — just ask on any editors’ forum what constitutes a page and their answers will predominantly be 250 words = 1 page.

It does and it doesn’t

Let’s begin with this: Regardless of what the formula is, the equivalency is arbitrary. It isn’t, today, scientifically based. It is, however, a method to increase or decrease the amount a client pays an editor.

Consider this example: According to Word, the chapter before you, as submitted by the author, is 118 manuscript pages, has 26,967 words, 161,167 characters without spaces, and 187,327 characters with spaces. Few clients or editors will accept the 118 manuscript pages as the chapter size. After all, who knows, for example, what spacing and font sizes are used throughout the chapter? And either (and both) of those can affect true count. Using the standard equivalency, this chapter is 108 manuscript pages. Most editors would go no further. But suppose the page count were based on 1600 characters with spaces? The page count is now 118 (rounding fractional page results [117.08] up); and if it were based on 1600 characters without spaces, it would be 101 manuscript pages.
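The three counts can be checked with a few lines of arithmetic (rounding fractional pages up):

```python
import math

words, chars_no_spaces, chars_with_spaces = 26_967, 161_167, 187_327

def pages(count: int, per_page: int) -> int:
    # round fractional page results up
    return math.ceil(count / per_page)

print(pages(words, 250))               # 250 words/page        -> 108
print(pages(chars_with_spaces, 1600))  # 1600 chars w/ spaces  -> 118
print(pages(chars_no_spaces, 1600))    # 1600 chars w/o spaces -> 101
```

Same chapter, three defensible definitions of a page, and a spread of 17 pages between the highest and lowest counts.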

The key

Note that the number (250 words, 1600 characters) used is, today, somewhat arbitrary. The 250 words formula has historical precedent, but not necessarily modern-day relevancy. It is not unusual to find publisher and packager clients, for example, who define 1 page as 300 words or 1800 characters without spaces. There is no magical method for determining the number. The key to this discussion and to the equivalency is to determine the factors, including profitability, that you consider in deciding what should constitute a page for your particular type of work. And then you should select a number — and method of counting — that will represent those factors and that you can articulate and defend to the client as best corresponding to the intricacies of the manuscript.

There are two noteworthy points. First, how a page is defined affects the page count. Second, the method used counts only the text items that Microsoft Word can count; for example, it does not include the 22 graphic images that accompany the chapter, nor does it take into account the difficulty of the edit (the light vs. medium vs. heavy concept), and it doesn’t include in the count all the formatting and unformatting tasks the editor is expected to perform.

And another problem is…

Another problem with the word count method — and a major problem for me because of the type of books I generally edit — is that Microsoft Word counts words such as N,N-diethyl-3-methylbenzamide as being the same as pain; that is, Word counts N,N-diethyl-3-methylbenzamide as one word, just as it counts pain as one word. But the two are not, wordwise, equivalent — at least not to my thinking. A character count, however, treats N,N-diethyl-3-methylbenzamide (29 characters) and pain (4 characters) equivalently — that is, 1 character = 1 character.
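The point is easy to demonstrate: by Word's whitespace-based definition, both strings below count as one word, but a character count tells them apart:

```python
compound = "N,N-diethyl-3-methylbenzamide"
word = "pain"

# Word-style counting: each is a single whitespace-delimited token
assert len(compound.split()) == 1 and len(word.split()) == 1

# a character count distinguishes them
print(len(compound), len(word))  # 29 4
```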

If I were editing a text-only novel, the 250 words = 1 page equivalency might work fine. From my perspective, everything would balance out; the fact that the average English word is five characters would probably offer a fair equivalency. But when a manuscript is more complex and the tasks are more involved (e.g., the chapter has 600 references that need to be formatted and checked; reference callouts need to be changed from inline to superscript; dozens of compounds have to be checked against The Merck Index), then the equivalency falters in its purpose and a different formulation of the equivalent of one manuscript page has to be devised.

(It is worth noting that many editors will respond that these factors and problems should be, and are, addressed by the rate rather than the count. That works well if your clients are amenable to paying a high rate; my experience has been that it is significantly more difficult to get a client to pay a higher rate than to get the client to accept a method of counting pages that differs from the standard equivalency. It is also important to understand that not all factors are equal and that the weight to be given a particular factor can vary from manuscript to manuscript. Finally, it is easier to set a variety of rates when an editor’s clientele are individual authors rather than publishers and packagers. Publishers and packagers expect a long-term, many-project relationship and a single rate and method of calculating a page for all projects, without regard for complexity or other factors.)

Don’t get me wrong. If you are editing chemistry textbooks, have investigated page equivalencies, and are satisfied with the “standard” equivalency formulation, use it. The key is that you have considered whether the “standard” equivalency sufficiently accounts for the types of problems you encounter.

There is no single, set equivalency that works well in all situations. In my editing, I deal with words of the N,N-diethyl-3-methylbenzamide type and with lots of references and figures. Consequently, a character-based formula works best for me. This I have documented many times over the years. I have also learned that I can ignore graphic images when doing my count because the formula I use is sufficient to account for them; no additional assessment or count is needed. I also know, from experience, that using a character count gives me a more accurate idea of how many pages I can edit in an hour than a word count does.

The bottom line is…

Editors need to act like good businesspeople. They must evaluate how best to calculate a page equivalency for their type of work. Editors should not automatically assume that the “standard” equivalency is the fairest option for them. When determining the equivalency they’ll use, they need to keep in mind all the variables that are missing from Word’s count, and also all the tasks they will be expected to perform in addition to content editing.

Richard Adin, An American Editor

March 13, 2017

The Business of Editing: The AAE Copyediting Roadmap IV

I ended The Business of Editing: The AAE Copyediting Roadmap III with a discussion of Style Inserter and with my repeating an earlier comment about how the smart editor creates the wheel once and reuses it instead of recreating the wheel with each new project. (I do understand that if you are doing one-off projects it is more difficult to create a reusable wheel, but there is a lot to editing that is repetitive across projects, even across one-off projects.)

One of the tools that I tried to use in my early years of editing was Word’s bookmarks. I would come across something that I thought I was likely to need to revisit as editing progressed, and I would add a bookmark. The problem was — and remains — that Word’s bookmarking is primitive and not all that helpful. Of what value is a list of dozens of bookmarks that are nondescriptive, can’t be organized, and are so similar that it is hard to tell which is the one you need to go to? Before EditTools’ Bookmarks, I found using bookmarks to be very frustrating. (For previous essays on bookmarks, see The Business of Editing: Using & Managing Bookmarks and Bookmarking for Better Editing.)

EditTools’ Bookmarks are much more powerful and usable than the standard Word bookmarks. A detailed description of the Bookmarks macro is available at the wordsnSync website; here I want to address how I use bookmarks in editing.

My first use is to mark callouts of display items in the text. It is pretty easy to recall that Figure 1 was called out in the text when a manuscript has only one figure, but if a chapter is 50 pages long and has ten figures and six tables (and perhaps other display items), using bookmarks makes it easy to confirm that all are called out in the text. I insert a bookmark at the first instance of an in-text callout of a figure or table (see arrows [#1] in figure below) (you can make this image, as well as other images in this essay, larger by clicking on the image).

EditTools’ Bookmarks dialog

When I get to the section of the manuscript that has, for example, the figure legends, I match the callout bookmark to the figure legend — that is, when I am styling/coding the legend to Figure 3, I make sure that there is a callout in the text for Figure 3 by looking at the above dialog to see if there is a bookmark fig 003. If there is, I use Move Bookmark (#2 in above figure) to move the bookmark from the callout to the legend. I do this so that when I am editing the manuscript (rather than styling/coding), I can go from a figure’s callout to its legend (I insert a temporary pause bookmark at the callout so I can easily return to that spot), edit the legend and make sure the figure and the text are aligned (addressing the same issue), and then return to where I had paused editing. If there is no bookmark for the callout, I search the text for it; if it remains unfound, I query the author. Using bookmarks at this stage of the editing process gives me an easy way to check that all display items are called out in the text.

I also use bookmarks to mark something that I think I may need to check later. For example, if I see that references are going to be renumbered and that Table 1 includes reference callouts, I can insert a bookmark similar to that shown here (#3):

Descriptive bookmarks

I keep the Bookmarks dialog open while I edit, so reminders like this are always visible and it is easy to add a bookmark. If a bookmark will be used repeatedly, like the “fig” bookmark, I create a custom button (#4) so that I can insert it with a single click rather than typing it out each time. (See Bookmarks at wordsnSync for more information about the custom buttons.)

There are two other things I do with bookmarks that help with editing. First, when I have edited a display item, such as a table, I rename the bookmark — I do not delete it because I may need to return to the item and this is an easy way to navigate. Renaming is easy and I have chosen a default naming convention as shown below (#5).

My default renaming convention

This renaming convention tells me that I have edited the display item, which I would not have done if the display item was not called out in the text. It also leaves me a bookmark I can use to navigate to the item in case a correction or comment is needed after additional editing. (You can create your own default naming convention or manually rename the bookmark using the Bookmark Rename dialog shown below. The Rename dialog lets you create standardized renaming conventions — a prefix, a suffix, or both, as I use — and choose one as the default. The advantage of creating a default is that the renaming takes three mouse clicks: one to select the bookmark, one to click Rename, and one to click OK — quick and easy. Try renaming a bookmark that quickly using Word’s Bookmark function.)
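The renaming itself amounts to applying a stored prefix and/or suffix and letting the alphabetized list re-sort. A sketch of that behavior (the “x ” prefix as an edited marker is an assumption for illustration, not necessarily my actual convention):

```python
def rename_bookmark(bookmarks, old_name, prefix="", suffix=""):
    """Rename a bookmark by a standardized convention (prefix, suffix,
    or both) and keep the list alphabetized, so renamed 'edited' items
    regroup away from the still-pending ones."""
    new_name = f"{prefix}{old_name}{suffix}"
    bookmarks[bookmarks.index(old_name)] = new_name
    bookmarks.sort()
    return new_name
```

Because the list stays sorted, an “x ”-prefixed bookmark naturally drifts toward the end of the list, separating finished items from those still awaiting attention.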

Bookmark renaming dialog

Bookmarks that I no longer need, I delete. I like to keep lists that I need to check to a minimum. Thus, for example, once I have rechecked the reference numbering in Table 1, I will delete the bookmark that acts as both a reminder and a marker.

In my editing process, I use bookmarks extensively, especially as reminders to do certain tasks before I decide that the editing of the manuscript is complete — essentially a to-do list for the document I am editing. Unlike Word’s Bookmarks, EditTools’ Bookmarks lets me use plain-English descriptive bookmarks and organize them. Note that the bulleted “Recheck” bookmark appears as the first entry and the “x” (to signify edited) renamed bookmark has moved to the end of the list.

The next step in my editing path is to create my project-specific Never Spell Word dataset. When I first begin a project, this dataset may have only a few items in it, but it grows as the project progresses. Never Spell Word is the subject of The Business of Editing: The AAE Copyediting Roadmap V. Never Spell Word is a key item in my editing roadmap.

Richard Adin, An American Editor

March 8, 2017

The Decline & Fall of Editorial Quality

Three events occurred in the past several weeks that started me thinking about the decline and fall of editorial quality. One was a job offer I received; the other two were book reviews I read. I begin with the book reviews.

The first review was in The Economist (February 18, 2017, pp. 69–70). It was a review of the soon-to-be-published biography Jonathan Swift: The Reluctant Rebel by John Stubbs (W.W. Norton, 2017). What alarmed me was this:

However, Mr. Stubbs’s account has a few surprising factual errors — the battle of the Boyne, arguably the best-remembered event in Irish history, is dated as 1689, a year early, and the medieval town of Kilkenny is placed “60 miles to the south-east” of Dublin (which would put it smack in the middle of the Irish Sea). [p. 70]

A few days later I was reading the essay “Can We Ever Master King Lear?” by Stephen Greenblatt (The New York Review of Books, February 23, 2017, pp. 34–36), which was reviewing The One King Lear by Sir Brian Vickers (Harvard University Press, 2016). Greenblatt wrote:

…But perhaps something else is occurring here, some dark nemesis signaled in this book perhaps by the absence of a bibliography, or by the scanty index, or by the startling number of errors made by someone who excoriates careless printers and proofreaders. Why did no one catch “schholar” and “obsreved”? Who allowed the book’s stirring peroration to assert that Shakespeare “had no reason to go back to his greatest pay”?

These typos, like tiny pebbles, are foretastes of the rocks that have come crashing through Vickers’s glass walls. For three weeks last May, Holger Schott Syme, a professor at the University of Toronto, undertook…a detailed scholarly critique of The One King Lear….Syme’s appalled accumulation of entries…details an array of fundamental contradictions, misstatements, and errors throughout the book, including a disastrous miscounting of the number of pages in a text Vickers trumpeted as one of his crucial pieces of supporting evidence for Okes’s paper crisis. [p. 36]

On and on the review goes, highlighting the editorial problems.

The third event, the job offer, was a request that I personally edit a 3500-page medical manuscript that requires a “very heavy edit” and that I do so for less than 75% of my standard rate, calculated in a way that reduces that 75% to closer to 60%, and that I meet a tight deadline that would require editing 300 to 350 pages per week. (I suppose I should add that I was also required to typecode the manuscript and that there were lots of references, nearly all of which were in the wrong format and often incomplete, thus requiring me to look them up.) Of course, there was the admonition that I was “being asked to do this job because a high-quality edit is required” and the claim that the proffered fee was a “premium” rate.

I do not understand the thinking. Here are three separate events, three completely separate publishers, and three prestigious projects — two of which have editorially failed, the third of which will be an editorial failure. Thousands of books are published each year; only a handful are reviewed by The Economist or The New York Review of Books, both selective and well-respected book reviewers. The importance of these books to the literature of their fields is emphasized by their selection to be reviewed. The medical book, when published, will be a very costly book to buy and will serve as a reference for the subject matter area. All three books deserve and even require professional, high-quality editing, yet none received (or, in the case of the medical book, will receive) such editing, because of a deadly combination: inadequate pay, which makes it difficult to hire a cream-level editor; too short a schedule, which pressures an editor to edit speedily and thus sacrifice quality (the shorter the schedule, the greater the required sacrifice); and too many mechanical requirements that the editor must perform, along with the editing, within that too-short schedule and for that too-low pay.

What I don’t understand is why otherwise savvy business people are unable to grasp the idea that a high-quality edit is no different from any other high-quality artisanal job that cannot be performed by a robot or computer: to get a high-quality result you have to pay a fee commensurate with the quality level desired and allow the time needed to reach and maintain that level. In addition, you need to let the artisan focus on the quality edit and not sidetrack the editor with nonartisanal requirements.

Of particular concern, however, is that one of the problem books is from Harvard University Press. I have purchased books from Princeton University Press and Johns Hopkins University Press, to name but two university presses, that I have thought greatly overpriced for the poor editorial and/or production quality of the books (imagine, e.g., a $50, 168-page [including front and back matter] hardcover that comes without a dust jacket, along with other problems), but I also thought the books were outliers. Yet I am discovering that the more “prestigious” the university press, the more careful I need to be when buying a book published by that press.

Is it that these presses have grown too large and are under pressure to produce a profit as a consequence of their growth? When I first joined the editing ranks, university presses paid editors roughly 15% less than the commercial publishers paid and expected a higher-quality edit than the commercial presses. The lower compensation was balanced by a looser schedule and a true commitment to quality. In those days, editors sought to work for university presses because editors were more concerned about the artisanal aspects of editing than about the financial aspects.

That outlook changed as commercial publishers consolidated and began lowering/stagnating their fees and university presses tried to maintain the fee disparity. Editors by necessity became more oriented to business and less focused on being artisans. Where before an editor might edit three or four commercial projects followed by a university press project, as fees equalized (or came closer to equalization), the financial ability to take on university press projects lessened — the fees earned from editing commercial press projects no longer could carry the lesser fee of the university press because the spread was no longer sufficient.

We are beginning to see the fruits of these trends as an increasing number of error-riddled books are being published by both university and commercial presses. We are also beginning to see editors who have calculated and know their required effective hourly rate, and because they know their required rate, are turning down editing projects that do not offer sufficient compensation to meet that rate. Unfortunately, we are also seeing a parallel trend: the number of persons calling themselves editors is increasing and these “editors” advertise their willingness to work for a rate that is far too low to sustain life.

For publishers — university or commercial — this increase in the number of “editors” willing to work for a life-denying wage creates a problem. The problem manifests as a conflict between the requirement to minimize production costs — especially of “invisible” tasks like editing — and the desire to produce a high-quality-edit product. The conflict usually resolves in favor of cost-cutting, which will ultimately hurt the publisher’s bottom line, especially if the publisher begins to develop a reputation for poor-editorial-quality books, as the pool of book-buyers grows smaller and more discerning.

As long-time AAE readers know, I buy a lot of books (for an idea of how many I buy, take a look at the On Today’s Bookshelf series), but I have become wary of buying books from certain presses. Because of poor editorial quality, I certainly won’t be buying Jonathan Swift: The Reluctant Rebel or The One King Lear. Would you buy books from publishers known to skimp on editorial quality?

Richard Adin, An American Editor

March 6, 2017

The Business of Editing: The AAE Copyediting Roadmap III

A manuscript is generally “tagged” in one of two ways: by applying typecodes (e.g., <h1>, <txt>, <out1>) or by applying styles (e.g., Word’s built-in styles Heading 1 and Normal). My clients supply a list of the typecodes they want used or, if they want styles applied, a template with the styles built into the template. Occasionally clients have sent just a list of style names to use and tell me that, for example, Heading 1 should be bold and all capitals, leaving it to me to create the template. The big “issue” with typecoding is whether the client wants both beginning and ending codes or just beginning codes; with EditTools either is easy. Some clients want a manuscript typecoded, but most clients want it styled.
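The beginning-codes-only versus beginning-and-ending-codes distinction can be illustrated with a toy model in Python (the code names and angle-bracket format are assumptions for illustration, not any particular client’s spec):

```python
def apply_typecodes(paragraphs, code_map, ending_codes=False):
    """Prefix each paragraph with its typecode; optionally append a
    matching ending code. Paragraphs not listed in code_map default
    to body text ('txt')."""
    coded = []
    for i, para in enumerate(paragraphs):
        code = code_map.get(i, "txt")
        tagged = f"<{code}>{para}"
        if ending_codes:
            tagged += f"</{code}>"
        coded.append(tagged)
    return coded
```

The same mapping of paragraphs to element names drives styling, too; the only difference is whether the output is a text code or an applied Word style.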


If the client wants typecoding, I use EditTools’ Code Inserter Manager (shown below) to create the codes to be applied. Detailed information on Code Inserter and its Manager is found at wordsnSync. I will focus on EditTools’ Style Inserter and its Manager here because that is what I use most often. Code Inserter and Style Inserter and their Managers work very similarly. Describing one is nearly a perfect description of the other. (You can make an image in this essay larger by clicking on the image.)

Code Inserter Manager

Style Inserter

Style Inserter relies on a template. Usually the client provides a template, but if not, the client at least provides the names of the styles it wants used and a description of the style (e.g., Heading 1, All Caps, bold; Heading 2, title case, bold; etc.) and I create a template for the client. Occasionally the client uses Word’s default styles. Once there is a template, I open Style Inserter Manager, shown below, and create styles that the Style Inserter macro will apply.

Style Inserter Manager

As you can see, Style Inserter Manager gives me a great deal of control over the style and what it will look like. Applying a style in Word normally takes several steps; Style Inserter is a one-click solution. The information I entered into the Manager is translated into the Style Inserter macro (shown below). I organize the dialog however works best for me and keep it open as I style the manuscript. A single click applies the style and can move me to the next paragraph that requires styling.

Style Inserter

(If you do typecoding, you can tell the Code Inserter Manager whether you need just beginning codes or both beginning and ending codes. Like Style Inserter, once you have set up the coding in the Manager, you only need a single click to enter a code. As shown below, the macro looks and acts the same as Style Inserter. You do need a second click to enter an ending code because it is not always possible to predetermine where that end code is to be placed.)

Code Inserter

Take a look at the Style Inserter Manager shown earlier. Several formatting options are available, but two deserve special note: Head Casing (#A in the image) and Language (#B).

I am always instructed to apply the correct capitalization to a heading. It is not enough that the definition of the style applied to the head includes capitalization; the head has to have both the correct style and the correct capitalization applied. If None is chosen, then however the head is capitalized in the manuscript is how it remains. If the head should be all capitals, I choose Upper from the drop-down list (shown here):

Head Casing dropdown

Whatever capitalization style I select will be imposed on the head as part of applying the style. No extra steps are required once the capitalization requirements are made part of the style in the Manager. Title case capitalization is governed by the Heading Casing Manager, which is found in the Casing menu on the EditTools toolbar.

Head Casing

The Heading Case Manager (shown below) has two tabs: Head Casing and Words to Ignore. In the Head Casing tab you enter words or acronyms that are always to be all capitals or all lowercase; you can also indicate whether that “always” rule is to be ignored. The Words to Ignore tab is where you list words whose existing casing should be preserved when casing is applied, such as Roman numerals, symbols, and acronyms like “miRNA”. Thus, for example, even though the instruction is that the head be all capitals, the “mi” in “miRNA” remains lowercase. This works the same in the Code Inserter Manager.
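The effect of the two exception lists can be modeled in a few lines of Python (a sketch of the behavior only; the function names and the minor-word list are my assumptions, not EditTools internals):

```python
# Short words left lowercase in title casing (illustrative list).
MINOR_WORDS = {"a", "an", "and", "the", "of", "in", "on", "to", "for", "or"}

def upper_case_heading(heading, ignore=()):
    """All-capitals casing that leaves 'Words to Ignore' untouched."""
    return " ".join(w if w in ignore else w.upper() for w in heading.split())

def title_case_heading(heading, always_upper=(), ignore=()):
    """Title casing honoring always-capitals words and words to ignore."""
    out = []
    for i, word in enumerate(heading.split()):
        if word in ignore:
            out.append(word)                      # keep existing casing
        elif word.lower() in {w.lower() for w in always_upper}:
            out.append(word.upper())              # e.g., DNA, AIDS
        elif i > 0 and word.lower() in MINOR_WORDS:
            out.append(word.lower())
        else:
            out.append(word.capitalize())
    return " ".join(out)
```

With “miRNA” in the ignore list, an all-capitals head comes out as, say, “REGULATION OF miRNA EXPRESSION” — the “mi” survives exactly as the Manager’s Words to Ignore tab is described to behave.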

Head Casing Manager

Setting the Language

The Language option (#B in the Style Inserter Manager image above) is also important. One frustration for me is realizing, mid-edit, that the authors (or some gremlin) set the paragraph’s language to Farsi, so that when I correct a misspelling it still shows as a misspelling because I am using American English. The Language option lets me choose the language to be applied (see image below). Selecting the language from the dropdown (here “English U.S.”) and checking the Language box incorporates into the style an instruction to set the language to my choice, overriding whatever language attribute is present in the manuscript.

Language Option in Style Manager

I make it a habit to incorporate the language instruction in every style. It saves me from wondering why the red squiggly line appears under a correctly spelled word, thereby removing an obstacle that slows editing (and lowers profitability). This works the same in the Code Inserter Manager.

Bookmarking While Styling

As I style the manuscript, I also insert bookmarks using EditTools’ Bookmarks. The bookmarks let me track elements of the manuscript. This is especially true because with EditTools’ Bookmarks I can create meaningful bookmarks, which is where we will start in The Business of Editing: The AAE Copyediting Roadmap IV.

Combo Click

But before we get to Bookmarks and the next essay, I want to mention another EditTools macro: Combo Click. I have found that when I do certain tasks I like to have certain macro managers open. Combo Click, shown below, lets me choose the combination of managers I want open. Instead of clicking on each manager individually, I click on the combination in Combo Click and those managers open.

Combo Click

Creating the combinations is easy with the Combo Click Manager shown here:

Combo Click Manager

Reusing the Wheel

The idea is to do as much work as possible quickly and with a minimum of effort. When I first set up, for example, Style Inserter, it takes a few minutes that I would not have to spend if I simply used the standard Word method. So editing chapter 1 may take me a few minutes longer than if I weren’t creating the Style Inserter dataset, but all subsequent chapters will take me less time than without Style Inserter. My point is that the smart businessperson looks at the macro picture, not the micro picture. EditTools works using datasets that the editor creates. Those datasets are the wheels — you create them and reuse them.

The next project I do for the client means I can load a previously created Style Inserter dataset and I can add those styles that are not already included and delete those that are no longer needed — a faster method than starting from scratch — and then save the new dataset under a new name.

The Business of Editing: The AAE Copyediting Roadmap IV picks up with Bookmarks and how I use them to help me remember to perform certain tasks and to navigate the manuscript.

Richard Adin, An American Editor

February 27, 2017

The Business of Editing: The AAE Copyediting Roadmap II

I rarely receive an entire manuscript in a single file. The manuscripts I work on are rarely single-author books; instead, each chapter is written by a different author or group of authors, and so the files are chapter oriented.

The Online Stylesheet

Even before I open the first chapter file, I begin preparing for the book. My first step is to create the online stylesheet (see Working Effectively Online V — Stylesheets [note: access to my online stylesheet is no longer available to the general reader]). I insert into the stylesheet any specific instructions for the project that I receive from the client. An example of specific instructions is shown here (you can make an image in this essay larger by clicking on the image):

Stylesheet Sample

I also add to the stylesheet any specific spellings or usages. For example, both “distention” and “distension” are acceptable spellings. If I know the client prefers one over the other, I add that spelling to the stylesheet. I also note the reference style to be used (usually with a quick note such as “references: follow AMA 10”). If there are variations, I will insert a note regarding the variations along with examples.

It is important to remember the purposes of the stylesheet. First, it is to help me be consistent throughout the project. It is not unusual to encounter a term or phrase in chapter 2 that I do not see again until chapter 10. Second, it is — at least in my business — a way for the client to observe my decision making. My stylesheets are available to my clients 24/7/365 and I encourage clients to review the stylesheets early and often. Most do not, but there have been occasions when a client has done so and has noted that they would prefer a different decision from the one I made. When the client takes the time to look at the stylesheet and advise me of a change they would like, it is easy to implement the change.

Third, the stylesheet is for the author. A well-done stylesheet can subliminally tell an author how good an editor I am. The author can see my diligence, and if the author has a preference (e.g., “distension” rather than “distention”), the author can communicate it and see that the change is made. Fourth, the stylesheet is for the proofreader. It tells the proofreader what decisions have been made and what the correct spellings are, and it gives myriad other information, depending on how detailed I make the stylesheet. It does neither the client nor the author nor me any good if the proofreader flags something as an error simply because the proofreader doesn’t have a copy of my latest stylesheet, in which the answer can be found (which is why my clients can both review the stylesheet online 24/7/365 and download the latest version — current to within 60 seconds of an entry having been made — 24/7/365).

Fifth, and finally, should I be asked to edit the next edition of the book, I can open the archived copy of the stylesheet and merge it into the stylesheet for the new edition. This enables consistency across editions, should that be something the client desires. And if the client’s schedule doesn’t accommodate mine, requiring the client to assign the new-edition project to another editor, either I or the client can access the archived stylesheet and provide a copy to the new editor. The importance here is twofold: first, it may give me a head start in the running to be the editor of the next edition; and second, it is good customer service and makes it easy for the client to think of me for other projects.

Once I have set up the stylesheet, it is time to tackle some of the other preediting tasks.

Cleaning Up the Manuscript

The first preediting tasks performed directly on the manuscript are aimed at cleaning up the manuscript. I do the cleanup before I do anything else on the manuscript. Cleanup means doing things like eliminating extra spacing and line breaks, and changing soft returns to hard returns or spaces. To clean up the manuscript, I use some of the macros found in The Editorium’s Editor’s Toolkit Plus 2014. For my methodology, I only use three of the Toolkit’s macros: FileCleaner, NoteStripper, and ListFixer. The only Toolkit Plus macro I use on every manuscript is ListFixer to convert Word’s autonumbered/lettered lists to fixed lists; the other two macros I use as needed.

EditTools has several macros that I use during the cleanup phase. The first macro I run is Delete Unused Styles. Unfortunately, Microsoft doesn’t permit some styles to be deleted, but this macro reduces style clutter. (Caution: If you are applying a template to the manuscript, do not run this macro after the template has been applied. Doing so may cause template styles to be removed.) I have found that running this macro first makes it easier to deal with the often myriad author-created styles, especially the ones that are attributes. A major shortcoming of Microsoft Word is that it encourages users to use styles but doesn’t make it easy for most users to understand how to use styles properly.
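The core logic of a delete-unused-styles pass can be modeled simply (a sketch only; which built-in styles Word actually protects from deletion is more involved than a single list):

```python
def delete_unused_styles(defined_styles, applied_styles, undeletable=("Normal",)):
    """Keep only styles that are applied somewhere in the document,
    plus built-in styles Word will not allow to be deleted."""
    used = set(applied_styles) | set(undeletable)
    return [s for s in defined_styles if s in used]
```

The payoff described above is exactly this filtering: the fewer leftover author-created styles survive, the fewer styles there are to reason about when the template is applied.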

Another significant problem with Word styles occurs when a compositor has taken a typeset file and converted it to a Word document for editing. Every time the typesetter modified line or word spacing, for example, the modification shows as a new style. Similarly, when applying attributes like bold and italic to an already-styled word or phrase, Word creates a new style that is identical to the already-applied style except that it incorporates the attribute. Consequently, the manuscript has dozens of overriding styles that need to be removed. Although Delete Unused Styles won’t remove these types of styles (because they are being used in the document), by eliminating unused styles, I need to deal with fewer styles.

After Delete Unused Styles, I run EditTools’ Cleanup macro. The Cleanup macro lets me customize what I want done and also lets me create a project- or client-specific cleanup dataset that complements the master Cleanup dataset.

The Cleanup macro is intended for those things that I would clean up in every manuscript. Look at the following image of the Cleanup Manager:

Cleanup Manager

Fixes include changing two spaces to one space (in the image: [space][space]->[space]) and changing a tab followed by a paragraph marker to just the paragraph marker (in the image: ^t^p->^p). In other words, the routine cleanup. The macro also lets me link to a project-specific (“specialty”) cleanup dataset that contains less-generic items for fixing, such as replacing “ß” with “β”. (It is not necessary to have a project-specific cleanup dataset; anything that I would place in the specialty dataset I could put, instead, in the general cleanup dataset. I use the specialty datasets to avoid the problems that can occur if one client wants, for example, “ / ” [i.e., a space on each side of the slash] and another wants “/” without spacing.)
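The general-plus-specialty pattern is easy to model in Python (a sketch of the idea, not the macro itself; the rule tuples mirror the examples above):

```python
def clean_up(text, general_rules, specialty_rules=()):
    """Apply each (find, replace) rule in order, repeating until stable
    so that, e.g., three spaces collapse to one. A rule whose
    replacement still contains its search string would loop forever,
    so such rules must be avoided."""
    for find, replace in list(general_rules) + list(specialty_rules):
        while find in text:
            text = text.replace(find, replace)
    return text
```

Keeping the client-specific rules in a separate list is what prevents the conflict described above: the general dataset stays neutral, and each client’s slash-spacing (or similar) preference lives only in that client’s specialty dataset.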

As we all know, different authors tend to have their own “peculiarities” when it comes to how something is styled. Those are the little things that one discovers either during a scan of a chapter or while editing. For example, in a chapter an author may style a measure inconsistently, first as “> 25” and then as “>25.” These are the types of things for which I previously used Word’s Find & Replace function. Because I scan the material before styling/typecoding, I often catch a number of these types of problems. To correct them, I use the Sequential F&R Active Doc tab of the Find & Replace Master macro, shown here:

Sequential F&R Active Doc

This macro lets me do multiple find-and-replaces concurrently — I can select some or all of them, mix and match, and save the find-and-replace items in a dataset for repeated use. The difference between Sequential F&R Active Doc and the Cleanup macro is that the Cleanup macro is intended for things found in multiple manuscripts, whereas Sequential F&R Active Doc is intended to replace Word’s Find & Replace.

After running Cleanup (and possibly Sequential F&R Active Doc), I may run Wildcard Find and Replace, depending on what I noticed when I scanned the manuscript. I have created dozens of wildcard strings that I can run individually or as part of a script; if none of my already-created strings will do the job, I can create a new one. (Wildcard Find & Replace is discussed in this series in the future essay The Business of Editing: The AAE Copyediting Roadmap VI.)
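Word’s wildcard syntax differs from regular expressions, but the underlying idea — a saved, ordered, reusable set of pattern-based replacements — can be sketched with Python’s re module (the patterns here are illustrative, fixing the “> 25” versus “>25” inconsistency mentioned earlier):

```python
import re

def run_wildcard_strings(text, rules):
    """Apply an ordered, saved set of (pattern, replacement) rules."""
    for pattern, replacement in rules:
        text = re.sub(pattern, replacement, text)
    return text

# Close up spacing after comparison operators, e.g. '> 25' -> '>25'.
SPACING_RULES = [
    (r">\s+(\d)", r">\1"),
    (r"<\s+(\d)", r"<\1"),
]
```

Because the rules live in a dataset rather than being retyped into a dialog each time, the same fixes carry over from chapter to chapter and project to project — the reuse-the-wheel principle in miniature.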

After doing the cleanup routine, I then style/typecode (discussed in the next essay in this series, The Business of Editing: The AAE Copyediting Roadmap III) and insert preliminary bookmarks (see the future essay in this series The Business of Editing: The AAE Copyediting Roadmap IV). Then it is time to turn to Never Spell Word (which I’ll discuss in The Business of Editing: The AAE Copyediting Roadmap V).

Richard Adin, An American Editor

February 20, 2017

The Business of Editing: The AAE Copyediting Roadmap I

Over the years, I have been asked by “young” editors (by which I mean editors new to the profession or who have only a few years of experience) about how I approach an editing project. Also over the years I have discussed with colleagues how they approach a project. What I have learned is that as our experience grows and as we adapt to the types of projects we edit, each editor creates his or her own methodology. But that is an unsatisfactory response to the question. Consequently, with this essay I begin a series of essays that discuss how I approach an editing project — that is, my methodology.

The Basics

As I am sure you can guess, much of my methodology is wrapped up in making as much use of methods focused on efficiency and productivity as possible, which in my case means a large reliance on my EditTools set of macros (for those of you who are unfamiliar with EditTools, you can learn about the macros that make up the package by visiting the wordsnSync website) and my online stylesheet (see Working Effectively Online V — Stylesheets [note: access to my online stylesheet is no longer available to the general reader]). I also rely on parts of The Editorium’s Editor’s Toolkit Plus 2014, primarily ListFixer and NoteStripper.

It is also important to keep in mind the types of projects I work on. Most of my projects are large medical books, ranging in size from 1500 to 20,000+ manuscript pages. This is important information because what works for nonfiction doesn’t necessarily translate well — in terms of efficiency or productivity — for fiction editing. As you read about my methodology, do not dismiss what I do by thinking it won’t work for you; instead, think about how it can be adapted to work for you — and, more importantly, whether it should be adopted and/or adapted by you for your editorial business.

Recreating the Wheel

No matter what type of manuscript you edit, the objective is (should be) to edit profitably. Profitable editing is possible only when we can make use of existing tools and adapt them to our needs. It makes no sense to reinvent the wheel with each project and it makes no sense to dismiss tools that others have created because the tools were created to work in a different environment. What does make sense is to create the wheel once and reuse it repeatedly and to take already-created tools and adapt them to what we do.

A good example is marketing. Smart marketers note which ads are most effective at pushing a product or business and then think about how such an ad can be adapted to promote their own business or product. Years ago, when I was planning marketing campaigns for my editorial business, I would indirectly, as part of a conversation, ask clients (including potential clients) about ads I had seen and thought effective. A positive response from clients would have me thinking about how to adapt the positively viewed ad for my business.

Looking at Tools

Which brings to mind another point. A common failing among editors is to look at a package of tools, decide that of all the tools in the package the editor can only make regular use of one or two of the tools, and thus the editor decides that the package is not worth buying. This, however, is faulty thinking.

In the early days of my editing career, I, too, thought like that. Then I kept seeing the same types of problems popping up in many of the manuscripts I worked on and it dawned on me that a package of tools I had dismissed had a tool in it that could solve some of these repeating problems in seconds. I realized that the package would pay for itself after just a very few uses of the one or two macros that I could use (thinking that the other macros had no value for me), and thereafter each time I used one of the macros, the package was making me a profit. I had to address the problems whether I used the package or not; the difference was the amount of time and effort I needed to solve the same problems with each new manuscript. By not using the package, I was reinventing the wheel for each project; by buying the package, I created the wheel once and reused it.

I also discovered something else: tools I had dismissed as unusable because of how I worked needed a second look. Usually I found that, with a little tweaking of my workflow, additional tools in the package would save me time and make me money. The lesson was that the stumbling block may be how I work, not the package, and that sometimes it makes sense to modify my methods; sometimes the “tool” knows better than I do. By not using the package, I was not learning to adapt my methodology to make my business more profitable.

Artisan vs. Businessperson

Editors like to view themselves as more artisans than businesspersons. There is nothing wrong with such a perspective as long as one does not forget that editing is a business and must be approached as a business. The artisanship is the bonus we get for having a smooth-running business. We have discussed in multiple essays the ideas of being businesslike and profitability (see, e.g., The Commandments: Thou Shall be Profitable; The Twin Pillars of Editing; The Business of Editing: A Fifth Fundamental Business Mistake That Editors Make; and The Business of Editing: The Profitability Difficulty; for additional essays, search An American Editor’s archives), so I won’t rehash the idea again other than to say that if you are not profitable, you will find it difficult to be able to afford the artisanship aspects of editing — and the key to profitability is to reuse, not recreate.

Back to the Basics

The other important point to note about my business is that, with rare exception, I work only with publishers (either directly or indirectly through packagers); that is, I do not work directly with authors. The importance of this point is that publishers generally have set rules they expect to be followed by the editor. For example, I rarely have to ask whether it should be one to nine or 1 to 9 or whether it should be antiinflammatory or anti-inflammatory. Publishers follow established style guides, like the AMA Manual of Style and The Chicago Manual of Style, and if they deviate from the manuals, they do so in written form that is applied across a particular line. Working with publishers brings a high degree of consistency (although a packager can have its own additional twists) to the “mechanical” aspects of what editors do, something that is hard to achieve when working directly with authors on a one-off project.

Coming Up Next

With these caveats in mind, The Business of Editing: The AAE Copyediting Roadmap II will begin the discussion of how I approach a manuscript.

(For an alternative approach to copyediting, watch Paul Beverley’s video “Book Editing Using Macros.” Paul’s approach is also macro reliant but, I think, less efficient. Paul makes his macros available in his free book Macros for Editors.)

Richard Adin, An American Editor

February 6, 2017

The Cusp of a New Book World: The Sixth Day of Creation

(The first part of this essay appears in “The Cusp of a New Book World: The First Day of Creation;” the second part appears in “The Cusp of a New Book World: The Fourth Day of Creation.” This is the final part.)

Donald Trump is late to the game. Reshoring of industry has been happening, albeit quietly, for the past several years. Also late to the game are publishers, but increasingly reshoring is happening in the publishing industry. The problem is that publishing-industry reshoring is not bringing with it either a rise in editorial fees or relief from the packaging industry. If anything, it is making a bad situation worse. It is bringing the low-fee mentality that accompanied offshoring to the home country.

Reshoring in the United States has meant that instead of dealing with packagers located in, for example, India, editors are dealing with packagers in their home countries. Yet professional editors continue to face the same problems as before: low pay, high expectations, and being made the unwitting scapegoat. Perhaps more important, the onshore packagers are not doing a better job of “editing”: publishers are offering onshore packagers the same editing fee they offered the offshore packagers, and because onshore packagers must pay onshore wages, they maintain the same or a lower level of editorial quality control than the offshore packagers.

There is nothing inherently wrong with the packager system; there is something inherently wrong with the thinking of publishers as regards the value of editing, with the system of freelance editing, and with packager editorial quality control. These problems are not solvable by simply moving from offshore to onshore; other measures are needed, not least of which is discarding the assumption that high-quality copyediting is available for slave wages.

Publishing is in a simultaneous boom–bust economic cycle. Profit at Penguin Random House in 2015, for example, jumped by more than 50% from its 2014 level to $601 million. Interestingly, print revenue in the publishing industry overall is rising (+4.8%) while ebook revenue is declining (−20%). Gross revenue from print is expected to remain steady through 2020 at $46 billion per year while ebook revenue continues to decline.

The key question (for publishers) is, how do publishers increase profits when revenues remain flat in print and decline in ebooks? This is the question that the Trumpian economic view ignores when it pushes for reshoring. Trumpian economics also ignores the collateral issues that such a question raises, such as, whether it does any good to reshore work that does not pay a living wage. The fallacy of Trumpian economics is in assuming that reshoring is a panacea to all ills, that it is the goal regardless of any collateral issues left unresolved; unfortunately, that flawed view has been presaged by the publishing industry’s reshoring efforts.

My discussions with several publishers indicate that a primary motive for reshoring is the poor quality of the less-visible work (i.e., the editing) performed offshore, even when the offshore packager has been instructed to use an onshore editor. Consider my example of “tonne” in the second part of this essay and multiply that single problem many times over. According to one publisher I spoke with, the way management insists that a book’s budget be created exacerbates the problems. The budgeting process sets the editing budget as if the editor were an offshore editor living in a low-wage country, with no allowance for the time or expense required to fix editorial problems that result from underbudgeting. Having set that editorial budget, the publisher requires the packager to hire an onshore editor at no more than the budgeted price, which means the packager must seek out low-cost editors who are often inexperienced or underqualified.

Packagers — both onshore and offshore — try to solve this “problem” by having in-house “experts” review the editing and make “suggestions” (that are really commands, not suggestions) based on their understanding of the intricacies of the language. This effort occasionally works, but more often it fails because there are subtleties with which a nonnative editor is rarely familiar. So the problem is compounded, everyone is unhappy, and the budget line remains intact because the expense of fixing the problems comes from a different budget line. Thus, when it comes time to budget the next book’s editing, the publisher sees that the limited budget worked last time and repeats the error. An endless loop of error is entered; it becomes a merry-go-round with no getting off.

Although publishers and packagers created the problem — low pay with high expectations — they have handy partners in editors. No matter how many times I and other editorial bloggers discuss the need for each editor to know her individual required effective hourly rate (rEHR) and to be prepared to say no to projects that do not meet that threshold, few editors have calculated their individual rEHR, and they still ask, “What is the going rate?”
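The essay treats rEHR as a known quantity without spelling out the arithmetic. A common way to compute it — my sketch, not Adin’s own worksheet, and with purely hypothetical dollar figures — is to divide annual business costs plus the desired pretax income by a realistic count of billable hours:

```python
def required_ehr(annual_costs: float, desired_pretax_income: float,
                 billable_hours: float) -> float:
    """Required effective hourly rate (rEHR): the minimum average
    hourly earnings needed to cover business costs and still reach
    the income target. All figures are annual."""
    return (annual_costs + desired_pretax_income) / billable_hours

# Hypothetical numbers: $12,000 in business costs, a $48,000 income
# target, and 1,200 billable hours (roughly 25 hours/week, 48 weeks).
rate = required_ehr(12_000, 48_000, 1_200)
print(f"rEHR: ${rate:.2f}/hour")  # rEHR: $50.00/hour
```

Any project whose expected earnings per hour fall below that baseline is, by this arithmetic, work done at a loss.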

In discussions, editors have lamented the offshoring of editorial work and talked about how reshoring would solve so many of the editorial problems that have arisen since the wave of consolidation and offshoring began in the 1990s. Whereas editors were able to make the financial case for using freelancers, they seem unable to make the case for a living wage from offshoring. The underlying premise of offshoring has not changed since the first Indian company made the case for it: Offshoring editorial services is less costly than onshoring because the publisher’s fee expectations are based on the wage scale in place at the packager’s location, not at the location of the person hired to do the job. In the 1990s it was true that offshoring was less costly; in 2017, it is not true — and editors need to demonstrate that it is not true. The place to begin is with knowing your own economic numbers.

Knowing your own numbers is the start but far from the finish. What is needed is an economic study. There are all sorts of data that can be used to help convince publishers of the worth of quality editing. Consider this: According to The Economist, 79% of college-educated U.S. adults read only one print book in 2016. Wouldn’t it be interesting to know how many editors were part of that group and how many books, on average, editors bought and read? Such a statistic by itself wouldn’t change anything but if properly packaged could be suasive.

When I first made a pitch to a publisher for a pay increase in the 1980s, I included in the pitch some information about my book reading and purchasing habits. I pointed out that on average I bought three of this particular publisher’s hardcover titles every month. I also included a list of titles that I had yet to buy and read, but which were on my wish list. I explained that my cost of living had risen x%, which meant that I had to allocate more of my budget to necessities and less to pleasures like books. And I demonstrated how the modest increase I sought would enable me to at least maintain my then current book buying and likely enable me to actually increase purchases. In other words, by paying me more the publisher was empowering me to buy more of the publisher’s product.

(For what it is worth, some publishers responded positively to such a pitch and others completely ignored it. When offshoring took hold and assignments no longer came directly from the publisher, the pitch was no longer viable. Packagers didn’t have a consumer product and insulated the publisher from such arguments.)

With reshoring, imagine the power of such a pitch if it is made on behalf of a group. Reshoring in publishing is occurring not primarily because costs can now be lower with onshoring rather than offshoring, but because of editorial quality problems. And while it would be difficult to gain the attention of a specific empowered executive at an international company like Elsevier or Penguin Random House, it is easier to establish a single message and get it out to multiple publishers.

The biggest obstacle to making reshoring advantageous for freelance editors is their reluctance to abandon the solo, isolated, individual entrepreneurial calling that supposedly drove them to become freelance editors in the first place. That used to be the way of accountants, doctors, and lawyers, among other professionals, but members of those professions are increasingly banding together. In my view, the time has come for editors to begin banding together and to gain full knowledge of what is required to make a successful editorial career.

This sixth day of creation can be the first day of a new dawning — or it can be just more of the same. That reshoring has come to publishing is an opportunity not to be missed. Whether editors will grab for that opportunity or let it slip by remains to be seen. But the first step remains the most difficult step: calculating your rEHR, setting that as your baseline, and rejecting work that does not at least meet your baseline.

Richard Adin, An American Editor

February 1, 2017

The Cusp of a New Book World: The Fourth Day of Creation

(The first part of this essay appears in “The Cusp of a New Book World: The First Day of Creation;” the final part appears in “The Cusp of a New Book World: The Sixth Day of Creation.”)

The world of publishing began its metamorphosis, in nearly all meanings of that word, with the advent of the IBM PS/2 computer and its competitors and the creation of Computer Shopper magazine. (Let us settle immediately the Mac versus PC war. In those days, Apple was building its reputation in the art departments of various institutions; it was not seen as, and Steve Jobs hadn’t really conceived of it as, an editorial workhorse. The world of words belonged to the PC, and businesses had to maintain two IT departments: one for words [PC] and one for graphics [Mac]. For the earliest computer-based editors, the PC was the key tool, and it was the computer for which the word-processing programs were written. Nothing more need be said; alternate facts are not permitted.)

I always hated on-paper editing. I’d be reading along and remember that I had earlier read something different. Now I needed to find it and decide which might be correct and which should be queried. And when you spend all day reading, it becomes easy for the mind to “read” what should be there rather than what is there. (Some of this is touched on in my essays, “Bookmarking for Better Editing” and “The WYSIWYG Conundrum: The Solid Cloud.”) So who knew how many errors I let pass as the day wore on and I “saw” what should be present but wasn’t. The computer was, to my thinking, salvation.

And so it was. I “transitioned” nearly overnight from paper-based editing to refusing any work except computer-based editing. And just as I made the transition, so did the authors whose books I was editing. I worked then, as now, primarily in the medical and business professional areas, and doctors and businesses had both the money and the desire to leave pen and paper behind and move into the computer world. Just as they used computers in their daily work, they used computers to write their books, and I was one of the (at the time) few professional editors skilled in online editing.

The computer was my salvation from paper-based editing, but it also changed my world, because with the rise of computers came the rise of globalization. How easy it was to slip a disk in the mail — and that disk could be sent as easily to San Francisco as to New York City as to London and Berlin or anywhere. And so I realized that my market was no longer U.S.-based publishers; my market was any publisher, anywhere in the world, who wanted an American editor.

But globalization also had a backswing for me. The backswing came with the consolidation of the U.S. publishing industry — longtime clients being sold to international conglomerates. For example, Random House, once a publisher with a few imprints, ultimately became today’s Penguin Random House, a megapublisher that owns 250 smaller publishers. Elsevier was not even in the U.S. market, yet today it has absorbed many of the publishers that were, such as W.B. Saunders and C.V. Mosby. This consolidation led to a philosophical change as shareholder return, rather than family pride, became the dominant requirement.

To increase shareholder return, publishers sought to cut costs. Fewer employees, more work expected from those employees, increased computerization, and the rise of the internet led to offshoring and the growth of the Indian packaging industry. For years now, much of the work freelancers receive has come from packagers — whether based in the United States, Ireland, or India, it doesn’t matter where — who compete to keep prices low so that work flow stays high, while attempting to maintain some level of quality, although editorial quality has declined steadily in recent years along with fees. (One major book publisher, for example, will not approve a budget that includes a copyediting fee higher than $1.75 per page for a medical book, yet complains about the quality of the editing.)
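A per-page cap translates directly into an hourly ceiling once you assume an editing pace. The paces below are illustrative assumptions of mine, not figures from the essay, but they show why a $1.75/page cap and quality complaints go hand in hand:

```python
def effective_hourly_rate(fee_per_page: float, pages_per_hour: float) -> float:
    """Hourly earnings implied by a per-page fee at a given editing pace."""
    return fee_per_page * pages_per_hour

# At the $1.75/page cap mentioned above (pace figures are assumptions):
fast = effective_hourly_rate(1.75, 10)    # a brisk 10 pages/hour -> $17.50/hour
careful = effective_hourly_rate(1.75, 5)  # a careful 5 pages/hour -> $8.75/hour
print(fast, careful)  # 17.5 8.75
```

Under those assumptions, even rushed work grosses well under most editors’ likely rEHR, and careful medical copyediting earns less than minimum wage in much of the United States.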

The result was (and is) that offshoring turned out to be a temporary panacea. The offshore companies thought they could do better but are discovering that they are doing worse, and their clients are slowly but surely becoming aware of it. One example: I was asked to edit a book in which the author used “tonne,” as in “25 tonnes of grain.” The instruction was to use American spellings. The packager for whom I was editing the book had my editing “reviewed” by in-house “professional” staff who were, according to the client, “experts in American English” (which made me wonder why they needed me at all). These “experts” told me that the spelling was incorrect and should be “ton,” not “tonne.” I protested, but felt that as they were “experts” there should be no need to explain that “tonne” means “metric ton” (~2205 pounds), whereas “ton” means either “short ton” (2000 pounds) or “long ton” (2240 pounds). After all, don’t experts use dictionaries? Or conversion software? (For excellent conversion software for Windows only, see Master Converter.) Professional editors do not make changes willy-nilly. The client (the packager) insisted, and so the change was made, each instance accompanied by the comment, “Change from ‘tonne’ to ‘ton’ at the instruction of [packager].”

This example illustrates the types of errors that have crept into editing with the globalization of editorial services and the concurrent rise of packagers and lower pay for editors. It is also an example of a problem that existed in the paper-based days. Although the computer-based system has no formal way of assigning fault, when an error of this type is made, the author complains to the publisher, who complains to the packager, who responds, “We hired the editor you requested we hire, and this is their error.” The result is the same as if the error had been marked CE (copyeditor’s error) in flashing neon lights. The editor, being left out of the loop and never having contact with the publisher, becomes the unknowing scapegoat.

And it is a prime reason why we are now entering the sixth day of creation — the reshoring of editorial services, which is the subject of the third part of this essay, “The Cusp of a New Book World: The Sixth Day of Creation.”

Richard Adin, An American Editor
