An American Editor

March 25, 2015

The Business of Editing: Making Search & Replace Efficient & Profitable

Readers of An American Editor know by now my mantra: It has to be profitable! Profit is subject to myriad meanings; for me it means financial profit, whereas for other editors it has no financial meaning — rather, it must be soul satisfying; other editors have other meanings. The key is not how profit is defined but that profit represents what we, individually, seek when we take on an editorial project.

When profit is defined in financial terms, it sets the parameters for how an editor approaches a project. With financial profit as the motivator, the editor seeks to do the very best job he can do but in the least amount of time. It was with that in mind that EditTools was created; it was with that in mind that Editor’s Toolkit Plus and PerfectIt and myriad other time-saving macros were created.

Microsoft Word’s Find Feature

One of the “headaches” of the type of editing work I generally do (medical textbooks) is the use of acronyms and abbreviations (hereafter combined into “acronyms”). For most of my clients, the general rule is that an acronym must be used not less than four times in a document (i.e., once when it is defined plus three additional instances); if it is used fewer times, then it should always be spelled out. However, if it is used enough times that it is kept, then it should be defined only at first use and not again.

Applying that rule could be relatively easy in Word 2010, for example, because the Find feature, which opens the Navigation pane, can give you a count, as shown in the image below. (For a better view of an image in this essay, click on the image to enlarge it.)

Word 2010 Navigation Pane: Find

In this example, I searched for the abbreviation for hemoglobin: Hb (#1). Word tells me that there are 11 “matches” (#2) and I can see the 11 matches in the pane (#3). If I click on one of the entries in the pane (yellow highlighted box), Word will take me to that item and highlight it (#4). This is fine for telling me how many times Hb appears in the document, but Word limits the value of this function in several ways.

One limit occurs if you want to look up something that also can be found in myriad other constructs. For example, the acronym Th is used to represent T helper-type cell and if I search for Th, I get a response like that highlighted in this image:

Too many results to preview

In other words, Find is not going to be helpful. Another problem with Word’s Find function is that it searches the whole document; you cannot tell it to search and report back only up to a particular point. Why is that a problem? In the example document I am using for this essay, there are 65 references, many of which include Hb, and none of which I want in my count. I want to count only the primary text. According to Word, the example document is 33 pages, but the main text fills only 21 of those pages. The result is that the count Word gave me (11 instances of Hb; see #2 in the image “Word 2010 Navigation Pane: Find” above) is not accurate for determining whether Hb should be retained as an acronym.

Word’s Find also has another failing, which for me is a big failing: In a long document with lots of acronyms, Word’s Find gives me no way to easily determine whether an acronym has been previously defined. So if I encounter an acronym on page 5, where it is first defined, and then again on page 12, where it is defined again, absent good memory or conducting another Find search, I am unlikely to recall/know that the acronym has already been defined. There are things I could do — for example, I could go to each instance of Hb via the Navigation pane and highlight each and scan the nearby text to delete redefinitions — but that takes time, especially if there are a lot of instances, and thus eats into profit. That would be especially true with those chapters I have to edit where the text portion alone runs more than 100 pages (and sometimes 200 or more pages).

An Alternative Method:
Enhanced Search, Count, & Replace

For me the best way to deal with the question of acronyms is with EditTools’ ESCR (Enhanced Search, Count, & Replace) macro. The macro is found on EditTools’ Highlight menu (red arrow in image below).

ESCR on the Highlight Menu in EditTools

Because what I want to do is find out how many times hemoglobin (Hb) appears in the main text, I temporarily added (Hb) to the document (as shown in the image below) so that I could do a search for the terms separately and together. Usually I do not have to add the acronym or definition at the first instance because the author has done so, but sometimes the first instance of the acronym is undefined, or the definition and the acronym are separated by intervening words, in which case I add the definition/acronym before using ESCR so that they form a single selectable search term. In this case, if I hadn’t done so, the search phrase would have been hemoglobin A (HbA), which would have been too narrow; such a search would have excluded, for example, HbSS and HbC, when what I want searched for is Hb regardless of how it is used.

Before running ESCR, you need to select what you want it to look for. It can look for either a phrase or a single word. But remember that, like all macros, this is a dumb macro, so it can end up trying to look for things you do not want. Even that can be easily tackled with ESCR. Here I have selected the phrase hemoglobin (Hb) as the search phrase:

Selecting the Search Phrase

(Tip: I make it a point to select the phrase and copy it to memory. With this particular phrase there will not be a problem, but a phrase that includes terms separated by a comma is a problem, so by copying the “to find” phrase to memory, I can add it easily to the macro in correct form.)

With the phrase selected (and copied), I click ESCR. The macro produces a list of what it will search for, #1 in the image below. If you look at the list, you will see that the macro automatically separated the terms and created variations for singular and plural. Again, it is a dumb macro so it will do silly things, such as item 5, HBS. You can either let it go, or you can uncheck (#2) the item(s) you don’t want included in the search. I have found that for the most part it is as easy to leave it as to uncheck it.

What the ESCR macro will search for

But sometimes the list is so long, especially if the search phrase contains commas or parenthetical material, for example, BCNU (1,3-bis-(2-chloroethyl)-1-nitrosourea), that it is easier to add the terms I want myself, deselect everything, and select only the items I am interested in. The image below illustrates the problem that a search phrase like BCNU (1,3-bis-(2-chloroethyl)-1-nitrosourea) presents (in this instance, the macro came up with 36 variations to search for, of which two, possibly three, are usable). [NOTE: This particular problem will no longer be a problem with the next release of EditTools.] In these instances, I generally ignore what the macro has come up with, go to the Add terms dialog (see the image “Adding additional search terms” below), paste the selected phrase into the first field, and then break it up as I want, using the additional fields as described in the material following this image:

The problems commas, dashes, parens create

If the list is long and there are only a few items I want searched for, I click Deselect All (#3) and then check only the few terms I want used for the search. If the macro doesn’t list a variation that I want included, I click Add terms (#4) to bring up a dialog box in which I can add those missing variations:

Adding additional search terms

I have decided that I want the whole search phrase searched for (I wouldn’t normally do this; I am doing it here just for demonstration), so I pasted the selected phrase into the first empty field (#1 in image below). If I wanted to add a symbol, for example a Greek beta, I could click the * (#2) to bring up Word’s Symbol dialog; for an N or M dash, I could click the N- or M- button (#2). Once all the terms I want the macro to find are added here, I click OK (#3), which will take me back to the primary Find screen, where I can see that the macro has added my search term as item 11 (arrow). If I am done, I can click OK (#4) to run the macro.

Find screen after adding a phrase

When the macro runs, it goes through the entire list of items to see what it can find. [IMPORTANT: The macro searches from the character immediately following the selection you made to wherever the end bookmark (remhigh) is located. The end bookmark is usually inserted automatically based on other choices you have made in EditTools, but it can also be added manually by you. If it needs to be added by you, a message will pop up when you run the macro telling you that the remhigh bookmark is missing and needs to be added. Although you can place the bookmark anywhere in your document, it is recommended that you place it at the end of the primary text and before any references, tables, or figure legends/figures.] In other words, the macro searches down from the point of the selection to the end of the main text of the document.
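
For readers who like to see the moving parts, here is a rough VBA sketch of the range logic just described. It is not EditTools’ code (only the remhigh bookmark name comes from the macro itself), but it shows how a count restricted to the span from the selection to the bookmark differs from Word’s whole-document Find:

    Sub CountToRemhigh()
        ' Sketch only: count how many more times the selected term appears
        ' between the end of the current selection and the remhigh bookmark.
        Dim term As String, rng As Range, limit As Long, hits As Long
        term = Trim(Selection.Text)
        If Not ActiveDocument.Bookmarks.Exists("remhigh") Then
            MsgBox "Add a bookmark named remhigh at the end of the main text first."
            Exit Sub
        End If
        limit = ActiveDocument.Bookmarks("remhigh").Range.Start
        Set rng = ActiveDocument.Range(Selection.End, limit)
        With rng.Find
            .ClearFormatting
            .Text = term
            .MatchCase = True          ' so Hb is not conflated with HB or hb
            .Forward = True
            .Wrap = wdFindStop
            Do While .Execute
                If rng.End > limit Then Exit Do   ' ignore hits past the bookmark
                hits = hits + 1
                rng.Collapse wdCollapseEnd        ' keep searching after this hit
            Loop
        End With
        MsgBox "'" & term & "' appears " & hits & " more time(s) before the remhigh bookmark."
    End Sub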

The idea is that because the macro only searches forward (down) from the selection (see the text at #1 in the image below), not the whole document or backward (up), you use it at your first encounter with the acronym or phrase. Running the macro generates a report like the following:

ESCR’s report

In our example, it found only two variations: Hb and HbA (#2). Based on this report, it is safe to conclude, for example, that hemoglobin is not used in the text after this point. It is also clear that Hb, regardless of how it is used, appears only four additional times in the text. You now have two options. First, if you do not want Hb to be used as a substitute for hemoglobin, you can enter hemoglobin in the empty field (#3). (Similarly, if the book style is for Hgb to be used instead of Hb, you can instruct ESCR to change each instance of Hb to Hgb.) This will instruct the macro to replace Hb with hemoglobin. Second, if using Hb and/or HbA is OK, you can check the highlight box (#4) so that the macro will highlight these terms throughout the manuscript. The highlight will indicate to you that (a) you have already done a search for the term and found that it appears enough times in the manuscript to be retained, (b) the term has been previously defined, so if you should see it spelled out again, you know to replace the spelled-out version, and (c) the term is correct (even if Word insists it is misspelled). The image below shows that I have decided to change the one instance of Hb to hemoglobin (#6) and that I want HbA highlighted (#7).

Instructing the ESCR macro

Clicking OK (#5 in ESCR’s report above) causes the macro to run and make the changes. As the following image shows, ESCR made the changes as instructed — and does so with tracking on (even if you have turned tracking off). HbA is now highlighted (#1 and #2) and Hb has been changed to hemoglobin (#3).

After running the ESCR macro
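
The tracked change and the highlighting are the two behaviors worth noting. If you wanted to approximate them without EditTools, a plain VBA macro along the following lines would do it (a sketch, not EditTools’ code; the Hb/HbA terms and the turquoise color are just taken from this example):

    Sub ReplaceTrackedAndHighlight()
        ' Sketch: replace one acronym with tracking forced on, and highlight
        ' another so later occurrences are easy to spot while editing.
        Dim wasTracking As Boolean
        wasTracking = ActiveDocument.TrackRevisions
        ActiveDocument.TrackRevisions = True        ' record the edit even if tracking was off

        With ActiveDocument.Range.Find              ' change the unwanted acronym
            .ClearFormatting
            .Replacement.ClearFormatting
            .Text = "Hb"
            .Replacement.Text = "hemoglobin"
            .MatchCase = True
            .MatchWholeWord = True                  ' leave HbA, HbSS, and the like alone
            .Execute Replace:=wdReplaceAll
        End With

        Options.DefaultHighlightColorIndex = wdTurquoise
        With ActiveDocument.Range.Find              ' highlight the retained acronym
            .ClearFormatting
            .Replacement.ClearFormatting
            .Text = "HbA"
            .Replacement.Text = "^&"                ' keep the text, add the highlight
            .Replacement.Highlight = True
            .MatchCase = True
            .Execute Replace:=wdReplaceAll
        End With

        ActiveDocument.TrackRevisions = wasTracking ' restore the editor's own setting
    End Sub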

The ESCR macro is very useful in these circumstances. The two images below are from a document I edited recently. They are the first and last screen of the results of a search for Hb in a nearly 200-page chapter. The macro found 83 variations and you can see that some changes would be required. The advantage is that I can address all of these at one time, enabling me to make them uniform in presentation, and any changes are with tracking on so I can undo any erroneous changes. Word’s Find feature cannot do this as quickly and easily as ESCR (in fact, Word’s Find gave the “too many” message in this instance).

First screen of the Find results

Last screen of the Find results

Working smarter is a key to editing profitably. Making use of the right tool at the right time is one hallmark of a professional editor. Importantly, doing those things that help improve accuracy and consistency makes for happy clients. EditTools is an important tool in the professional editor’s armory.

Richard Adin, An American Editor

____________

Looking for a Deal?

You can buy EditTools in a package with PerfectIt and Editor’s Toolkit at a special savings of $78 off the price if bought individually. To purchase the package at the special deal price, click Editor’s Toolkit Ultimate.

March 11, 2015

Thinking Fiction: The Style Sheets — Part III: Locations

The Style Sheets — Part III: Locations

by Amy J. Schneider

In any novel or short story, the characters move around the world they inhabit: within buildings and throughout neighborhoods, cities, and even sometimes spiritual realms. Let’s talk about how to keep that motion logical. Many of the general concepts discussed last month in “The Style Sheets — Part II: Characters” apply here as well, so you may wish to review that article as we go along.

Here We Go Again: Details, Details, Details!

Last month I talked about the proclivity of copyeditors to keep alphabetical lists of characters, and how doing so isn’t really all that helpful for maintaining continuity. And so it is with geographical details. Rather than simply listing all places alphabetically, it’s much more useful to group places by their relation to each other: a house with all its descriptive interior and exterior details, shops that are near each other, streets and how they are connected, and so on.

One exception I do make to the no-alphabetizing rule is that after the edit is done, I often have a list of minor features such as streets, rivers, or businesses that do not have any extra information associated with them. These I will alphabetize by category—all the streets, all the rivers, and so on—just for ease of finding them.

Keeping It Real — Or Not

Many novels are set in real locations, and for the most part you’ll need to make sure that the details given reflect reality. Have you ever read a novel set in a location with which you are well familiar and scoffed when a street ran in the wrong direction, or a building was miles away from its real location? (One of my favorite examples from the world of television was from the 1970s sitcom Happy Days; in one episode some characters walked from Milwaukee to Fond du Lac, Wisconsin, in about an hour. Anyone who’s driven from the area where I grew up to Milwaukee knows that it is indeed about an hour from Fond du Lac to Milwaukee — in a car.) When specifics are given, check them out. Get out your atlas or pull up an online map.

However, bear in mind also that authors often introduce deliberate fictionalization, much in the same way that phone numbers in movies and on television are often of the “555” variety. So when you find such errors, bring them to the author’s attention, but query whether the error is deliberate.

For completely fictional locations, such as fictional towns or fantasy worlds, you may find it helpful to draw maps to help you (and the characters) keep your bearings.

Details to Track

With locations, as with characters, track anything that could be contradicted later. If a character’s bedroom window faces west, we don’t want her awakened in the morning by the blinding sun. Don’t let a solid wooden door turn into steel. And so on. Let’s look at what sorts of things you might want to put on your style sheet:

  • Cardinal directions and distances (the mountains are west of town; the comedy club is a few miles from Benny’s apartment; the fictional town of Midvale is a 3-hour drive from St. Louis; the laundromat is at the southwest corner of the intersection)
    • This point also relates to the timeline, discussed in next month’s article: if the characters take a day to travel from point A to point B, but a week to return, it’s time to query. Either there’s a problem with the number of days that have passed during that return trip, or there should be a good reason for the delay.
    • Also watch for the relationships between locations; if a hotel is just outside the city limits, how can the bar across the street be 5 miles out? This is where grouping by location can help you catch inconsistencies.
  • Names of regions, cities/towns, streets, geographical features, businesses, buildings; any proper nouns (including real names that might be spelled different ways: Walmart, 7-Eleven)
  • Descriptions of interiors
    • Décor, colors of walls/furniture/drapery, furniture type and placement; locations of rooms, windows, and doors; other details (the house has only one bathroom; Betty’s house has a business landline)
    • Right/left: rooms off hallways, doors, wings of mansions; turns taken while walking/driving to get from point A to point B (if specifics are given) (the main staircase turns to the right; Robert’s office is on the left side of the hallway off the living room)
    • Where the sun rises and sets (remember our early riser!)
    • Number of floors in buildings, locations of rooms (watch out for British usage here; in British usage, the ground floor is at street level and the first floor is the next one up, whereas in American usage we start with the first floor)
    • Remember that if an apartment is on the fourth (American!) floor, you will climb only three flights of stairs to get to it.
  • Descriptions of exteriors: landscaping, architecture (the cemetery is not fenced; Lydia’s house has a flagstone path from the small front porch to the sidewalk; neat flowerboxes at every window)
  • Business hours and regular events (the gas station is open every day; if the book club always meets at Beans & Books on Tuesdays, then what are they doing there on a Saturday?)

Again, as for character details, you can simply copy descriptions from the manuscript to your style sheet to save time, and edit as desired to save space. Note the chapter number where the description first occurs.

I Found a Contradiction; Now What?

Again, refer back to “The Style Sheets — Part II: Characters” for guidance on resolving discrepancies. If there is a minor difference, it’s probably safe to change and query. But if the problem involves a factor that’s critical to the plot, bring it to the author’s attention and suggest solutions if you can.

Remember that when you are wearing your copyeditor hat, you are like the continuity director for a movie. If the locations are meant to represent real locations, it’s your job to make sure they are accurate. If they are fictional (or fictionalized), make sure they stay true to themselves within that fictional world. Next month, I’ll talk about keeping an accurate timeline to ensure that the story does not breach the space-time continuum (unless it’s supposed to!).

Amy J. Schneider (amy@featherschneider.com), owner of Featherschneider Editorial Services, has been a freelance copyeditor and proofreader of fiction and nonfiction books since 1995. She has shared her insights on copyediting fiction as a speaker at the Communication Central conferences, in writing for the Copyediting newsletter, and in an audioconference for Copyediting.com. Amy can be reached at LinkedIn, via Twitter, and on Facebook.

March 7, 2015

Worth Reading: The Value of Copyediting

Readers of An American Editor know that I believe editing enhances the value of an author’s work. I also believe that you pay more for professional, quality editing and that not everyone who claims to be an editor is (should be) an editor. I also firmly believe that there are professional editors and nonprofessional editors, and that it is professional editors who add value to an author’s writing.

Too often the response from a client or a potential client is that “readers do not care” about editing and especially do not care about whether any editing is professionally done. A study by Wayne State University Associate Professor Fred Vultee seems to draw a different conclusion. The study was previewed by Natalie Jomini Stroud in her March 3, 2015 article at the American Press Institute blog:

Study Shows the Value of Copy Editing.

Although not the original study article (“Audience Perceptions of Editing Quality” published January 6, 2015 in Digital Journalism), which is behind a paywall, Stroud’s article provides a clear summation of Vultee’s study. Of special interest is the chart comparing copyediting to no copyediting.

Other blogs that discuss Vultee’s study include Journalist’s Resource (“The value of editing in the digital age: Readers’ perceptions of article quality and professionalism”) and Craig Silverman at Poynter.org (“Study: Readers value extra editing, women especially”). For a PDF of Professor Vultee’s presentation on the subject to ACES in 2012, see “Readers Perceptions of Quality”.

Perhaps some of these findings will be helpful in convincing clients of the value of our services. Regardless, there is some interesting reading above and some food for deep thinking. Enjoy!

Richard Adin, An American Editor

 

March 4, 2015

The Business of Editing: Correcting “Errors”

In my previous two essays, “The Business of Editing: Wildcarding for Dollars” and “The Business of Editing: Journals, References, & Dollars”, I discussed two ways to improve efficiency and increase profitability by using macros. Today’s essay digresses and discusses correcting earlier-made errors.

I need to put errors between quote marks — “errors” — because I am using the term to encompass not only true errors but changes in editorial decisions, decisions that are not necessarily erroneous but that after reflection may not have been the best decision.

Once again, however, I am also talking about a tool available in EditTools: the Multifile Find in the Find & Replace Master macro. The F&R Master macro has two parts, as shown below: the Sequential F&R Active Doc and Multifile Find (to see an image in greater detail, click on the image to enlarge it):

Sequential F&R Manager

 

Multifile Find Manager

Today’s discussion is focused on the Multifile Find macro, but the Sequential is worth a few words.

The Sequential F&R works on the active document. It is intended for those times when you know that you want to run a series of finds and replaces. If you are working on a book and it is evident that the author does certain things consistently that need changing, you can use this macro to put together several items that are to be changed sequentially, and you can save the criteria so that you can reuse them in the next document. I often find, for example, that authors use an underlined angle bracket rather than the symbol ≤ or ≥. I created an F&R for these items that I can run before editing a document to replace the underlined versions with the correct symbols.
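
As a concrete (if hypothetical) illustration of what one entry in such a sequential list does, here is the underlined-angle-bracket fix written as a bare VBA find-and-replace; EditTools wraps this kind of thing in a manager so the criteria can be saved and rerun:

    Sub FixUnderlinedAngleBrackets()
        ' Sketch: replace an underlined < or > with the true symbol (≤ or ≥),
        ' removing the underline in the process.
        Dim pairs As Variant, i As Long
        pairs = Array(Array("<", ChrW(8804)), Array(">", ChrW(8805)))
        For i = LBound(pairs) To UBound(pairs)
            With ActiveDocument.Range.Find
                .ClearFormatting
                .Font.Underline = wdUnderlineSingle       ' only the underlined form is wrong
                .Text = pairs(i)(0)
                .Replacement.ClearFormatting
                .Replacement.Font.Underline = wdUnderlineNone
                .Replacement.Text = pairs(i)(1)
                .Format = True
                .Execute Replace:=wdReplaceAll
            End With
        Next i
    End Sub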

For editorial “errors” I have made, however, it is the Multifile Find macro that is important.

As I have said many times, I tend to work on large documents. The documents tend to be multiauthored and each chapter is its own file. Sometimes I am able to work on chapters sequentially, but more often they come to me in haphazard order. Consequently, I have to make editorial decisions as I edit a chapter that may well affect earlier chapters that have yet to arrive. And it may be that if I had had the ability to edit the earlier-in-sequence chapter first, I would have made a different editorial decision.

For a recent example, consider “mixed lineage kinase.” My original decision was to leave it unhyphenated, but as I edited additional chapters my thoughts changed and I decided it really should be “mixed-lineage kinase.” But as is usual with these kinds of things, I had already edited another half dozen chapters when I changed my decision. In addition, by that time, I also had edited close to 40 chapters and I couldn’t remember in which chapters “mixed lineage” appeared.

The Ethical Questions First

The first questions to be dealt with are the ethical questions: First, is “mixed lineage kinase” so wrong that it can’t simply be left and future instances of “mixed-lineage” changed to the unhyphenated form? Second, if it needs to be changed to the hyphenated form, do I need to go back and change the incorrect versions or can I just notify the client and hope the proofreader will fix the problem? Third, if the future versions are to be hyphenated, can I just leave the unhyphenated versions and hope no one notices?

We each run our business differently, but number one on my list of good business practices is good ethics. In this case, the third option, to me, is wholly unacceptable. It is not even something I would contemplate except for purposes of this essay. A professional, ethical editor does not fail to accept responsibility for the decisions she makes; she does not attempt to hide them. The decisions are faced squarely and honestly and dealt with, even if it means a future loss of business from the client.

The first and second options are less clear. In the first instance, I need to make an editorial decision and abide by it. Whether to hyphenate or not isn’t really an ethical question except to the extent that it forces me to decide whether to overtly or covertly make a change. The world will not crumble over the hyphenation issue. Hyphenation does make the phrase clearer (especially in context), so ultimately, I think the editorial decision has to fall on the side of hyphenation being “essential”; I cannot skirt my obligation to do the best editing job I can by omitting future hyphenation, which means I need to go back and fix my “errors.”

The crux of the ethical question is really the second option. This depends on circumstances. If, for example, I know that the earlier edited material has already been set in pages, it makes no sense to resend corrected files. A note to the client is needed. If they have yet to be set, then new files are the order of business plus advising the client. The key is the advising of the client and identifying where the errors occur. I think that is the ethical obligation: for the editor to identify to the client exactly where the errors are to be found so that they can easily be corrected and to provide new files at the client’s request.

Multifile Find and “Errors”

This is where Multifile Find (MFF) comes into play. MFF will search all the files in a folder for phrases and words. You can have it search for up to 10 items at a time, and you can have it do one of two things: either it can find the wanted phrase and generate a report telling you where and how many times it is found, or it can find the phrase, pause to let you correct it, and then find the next instance. I generally generate the report first. An example of a report for “mixed lineage” is shown here:

Mixed Lineage Report

The report tells you the name of the document in which the phrase is found, the page it is found on, and how many times it occurs on that page. With this report, you can manually open the named files, go to the appropriate page, and decide whether a particular occurrence needs to be corrected. If I am not sure whether the client can use corrected files, I send the client a copy of this report along with my mea culpa.
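
A homegrown approximation of that report is easy to picture. The sketch below (not EditTools’ code; the folder path and phrase are placeholders) opens each .docx in a folder, notes the page of every hit, and dumps the findings into a new document:

    Sub MultifileFindReport()
        ' Sketch: list every occurrence of a phrase across the files in a folder,
        ' with the file name and page number, so affected chapters can be fixed.
        Const FOLDER As String = "C:\Projects\Chapters\"   ' hypothetical path
        Const PHRASE As String = "mixed lineage"
        Dim fname As String, doc As Document, rng As Range, report As String
        fname = Dir(FOLDER & "*.docx")
        Do While fname <> ""
            Set doc = Documents.Open(FOLDER & fname, ReadOnly:=True, Visible:=False)
            Set rng = doc.Range
            With rng.Find
                .ClearFormatting
                .Text = PHRASE
                .Forward = True
                .Wrap = wdFindStop
                Do While .Execute
                    report = report & fname & vbTab & "p. " & _
                             rng.Information(wdActiveEndPageNumber) & vbCrLf
                    rng.Collapse wdCollapseEnd
                Loop
            End With
            doc.Close SaveChanges:=wdDoNotSaveChanges
            fname = Dir
        Loop
        If report = "" Then report = "No occurrences of '" & PHRASE & "' found."
        Documents.Add.Range.Text = report               ' drop the report into a new document
    End Sub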

If I think the client might be able to use corrected files, I correct them and send the files, the report, and my mea culpa.

Multifile Find Update Files Option

If I know the client can use the corrected files because, for example, pages have not yet been set, I send the corrected files and an explanation of why I am sending revised files. But in this instance I use the MFF update option rather than generate report option:

Multifile Find Replace Option

The update option requires a few steps that differ from the generate report option. The biggest difference is that you need to save the find criteria for the update option; you do not need to do so for the generate report option.

I enter the find term in the first field (#1 in image above). I also need to check the Inc? (for Include?) box (#2). Only those terms listed that also are checked will be searched for. If I do not want the current active file also searched (assuming it is in the selected search directory), I check the box at #3, which is also where I select the search directory. Because I want to update the files, not generate a report, I check Update files (#4). I then Save my find criteria (#5).

The way the macro works is that it will first search the files for the first listed find term. When that is done, it will proceed to the next listed term. As you can see, you can list up to 10 terms to sequentially find.

Finally, I click Run (#6) and the macro will begin searching files in the selected directory until it comes to the first instance of the find term. When it finds a match, it displays the following message:

Find Message

In the file, it highlights the found term as shown here:

Highlighted Find Text

I can either insert my hyphen or click OK in the Find Message dialog to find the next instance. If I insert the hyphen in our example, I then need to click OK in the Find Message dialog to go to the next instance. When there are no more instances to be found in the particular file, a message appears asking whether you want to save the changed file:

Save Changes?

The macro then proceeds to the next file in which it finds the term and the process continues until the term is no longer found or you cancel the process.

Saving Time and Making Profit

Again, I think it is clear how the right macro can save an editor time and make editing more profitable. In my experience, it is the rare editor who doesn’t have a change of mind the further along she is in editing a project. I think it is a sign of a professional editor. But editing is a business and as a business it needs to make a profit. One way to do so is to minimize the time and effort needed to correct “errors” and to do so in a professional and ethical manner.

Over the years, I have found that using Multifile Find has not only enhanced my profitability, but it has enhanced my reputation as a professional editor because my clients know that I am not only willing to recognize that I have made a mistake, but I am willing to correct it. One reason I am willing to correct a mistake is that it doesn’t take me hours to do so; I can do it efficiently with EditTools’ Multifile Find.

Richard Adin, An American Editor

____________

Looking for a Deal?

You can buy EditTools in a package with PerfectIt and Editor’s Toolkit at a special savings of $78 off the price if bought individually. To purchase the package at the special deal price, click Editor’s Toolkit Ultimate.

March 2, 2015

The Business of Editing: Journals, References, & Dollars

In The Business of Editing: Wildcarding for Dollars, I discussed wildcard macros and how they can increase both accuracy and profitability. Profitability is, in my business, a key motivator. Sure, I want to be a recognized, excellent, highly skilled editor, an editor who turns ordinary prose into extraordinary prose, but I equally want to make a good living doing so — I want my business to be profitable.

Consequently, as I have mentioned numerous times previously, I look for ways to make editing more efficient. The path to efficiency is strewn with missteps when editors think that all editing tasks can be made more efficient; they cannot. But there are tasks that scream for efficiency. Wildcard macros are one method and work very well for the tasks for which they are suited. A second method, which deals with references, is the EditTools Journals macro.

As I relayed in previous articles, I work on very long documents that often have thousands of references. My current project runs 137 chapters, approximately 12,000 manuscript pages, with each chapter having its own list of references, ranging in length from fewer than 100 to more than 600 references. And as is true of the text of the chapters, the condition of the references varies chapter by chapter. The goal, of course, is for all of the references to be similarly styled, as well as accurate.

The first image shows a sample of how journal names were provided in one chapter. The second image shows how the names need to end up.

Journals in original

 

How the journals need to be

The question is how do I get from before to after most efficiently? The answer is the Journals macro.

The key to the Journals macro is the Journals dataset. In my case, I need journal names to conform to the PubMed style. However, I could just as easily create a dataset for Chicago/MLA style (American Journal of Sociology), CSE (Cell Biochem Funct.), APA (Journal of Oral Communication,), AAA (Current Anthropology), or any other style. The image below shows the Journals Manager with my PubMed dataset open. The purple arrow shows a journal name as provided by an author; the blue arrow shows the correct PubMed name of the journal, that is, what the macro will change the wrong form to.

PubMed dataset in Journals Manager

The next image shows a sample APA-style dataset. The red arrow shows the abbreviated version of the journal name and the green arrow shows the full name to which it will be converted by the macro.

APA style in Journals Manager

As I stated, nearly all my work requires PubMed styling, so my PubMed dataset is by far the largest. If you look at the PubMed dataset image above, you will see that as of this writing, the dataset contains more than 64,000 journal name variations. “Variations” is the keyword. Authors give journal names in all kinds of styles, so to cover the possibilities, a single journal may have two dozen entries in the dataset.
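
Conceptually, the dataset is just a long lookup table: many wrong forms, each pointing to one right form. A toy version in VBA might look like the sketch below (the handful of entries and the turquoise highlight are illustrative only; EditTools manages tens of thousands of entries and its own color scheme):

    Sub FixJournalNames()
        ' Sketch of the dataset idea: each "wrong" variation maps to the correct
        ' PubMed form, is replaced with tracking on, and is highlighted so the
        ' corrected names are easy to spot on review.
        Dim wrongNames As Variant, rightNames As Variant, i As Long
        wrongNames = Array("N. Engl. J. Med.", "New Engl J Med", "New England Journal of Medicine")
        rightNames = Array("N Engl J Med", "N Engl J Med", "N Engl J Med")

        ActiveDocument.TrackRevisions = True
        Options.DefaultHighlightColorIndex = wdTurquoise
        For i = LBound(wrongNames) To UBound(wrongNames)
            With ActiveDocument.Range.Find
                .ClearFormatting
                .Replacement.ClearFormatting
                .Text = wrongNames(i)
                .Replacement.Text = rightNames(i)
                .Replacement.Highlight = True
                .MatchCase = True
                .Execute Replace:=wdReplaceAll
            End With
        Next i
    End Sub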

The key to creating the dataset is to make use of the Journal Manager — and to keep adding new variations and journals as you come across them: Spend a little time now to make more money every future day. The images of the Manager shown above show you the primary interface. The problem is that it would take an inordinate amount of time to add each possible variation individually. The smarter method is to use the Multiple Entries screen, as shown here:

Journals Manager Multiple Entry dialog

With the Multiple Entry dialog open, you enter a variation in the #1 field. By default, all of the trailing punctuation is selected (#2), but you could choose among them by deselecting the ones you didn’t want. For example, if the style you work in requires that a journal name be followed by a comma, you might want to deselect the comma here because this is the list of “wrong” styles and having a trailing comma would not be “wrong.” Clicking Add (#3) adds whatever you have typed in #1 to the main screen (#4) along with the selected trailing punctuation. In the example, I entered N Engl J Med once in #1, left the default selection in #2, clicked Add (#3), and had five variations added to the main field (#4) — I did not have to type N Engl J Med five times, just the once.
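
Under the hood, that convenience amounts to generating one string per trailing punctuation mark. A sketch of the idea (the exact punctuation set EditTools offers may differ):

    Function TrailingPunctuationVariants(baseName As String) As Collection
        ' Sketch: from one typed journal name, build the variants that differ
        ' only in trailing punctuation, so the name is typed only once.
        Dim marks As Variant, v As New Collection, i As Long
        marks = Array("", ".", ",", ";", ":")
        For i = LBound(marks) To UBound(marks)
            v.Add baseName & marks(i)
        Next i
        Set TrailingPunctuationVariants = v
    End Function

Feeding it N Engl J Med yields five strings, one per punctuation choice, which is the sort of list the dialog builds for you.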

I then repeated the process for N. Engl. J. Med. (#4) and am prepared to repeat it for New Engl J Med. (#1). I will repeat the process for a variety of variations in an attempt to “kill” multiple possibilities at one time. When I am done, I will click OK (#5), which will take me back to the main Manager screen, shown here:

Journals Manager AFTER Multiple Entry

The main Manager screen — after using the multiple entry dialog — shows in faint lettering “Use ‘Multiple Entries’ button to adjust” in the Add Journal field (#1). This means two things: First, it tells you that there are journal variations waiting to be added to the dataset, and second, that if you want to modify the list of waiting names, either by adding or deleting, click the Multiple Entries button to bring the dialog back up for editing. If you are ready to add to the dataset, the next step is to tell the macro to what the “wrong” versions should be corrected. This is done by typing the correct form in the Always correct journal field (#2).

If your style was to add a comma after the correct form, you could enter the correct name trailed by a comma here. In the example I show, you would just add the comma after Med. But that might not be the best way to do it because you then lose the ability to use the dataset for a style that is identical but that doesn’t use the comma. There is an alternative, which we will get to. What is necessary, however, is that the correct form be entered here so the macro knows what to do. After entering the correct form (#2), click Add (#3) to add all of the variations and the correct form to the dataset.

The macro will not add duplicate entries so no need to worry about having an entry appear multiple times in the dataset. The macro automatically checks for duplicates. When you are done adding for this session, click Save & Close. (Tip: If you plan to add a lot of entries in one sitting, every so often click Save. That will save the dataset with the newest entries and let you continue to add more. Until Save or Save & Close is clicked, any entries are not permanently part of the dataset.)

Once you have your dataset, you are ready to unleash the Journals macro. It is always a good idea to put the reference list in a separate file before running the macro, but that can’t always be done. Separating the references into their own file helps speed the macro.

When ready to run the macro, click Journals (red arrow below) on the EditTools Tab.

EditTools Tab

Clicking Journals brings up this dialog with options:

Journals Macro Options

Here is the best place to select trailing punctuation you want added to the correct journal name. Clicking on the dropdown (#1) will give you the choice of comma, period, semicolon, colon, or the default “none.” If you choose, for example, semicolon, every time a journal name is corrected, it will be followed by a semicolon. Note, however, that if the journal name is correct already except that it doesn’t have the trailing punctuation, the punctuation will not be added. In other words, New Engl J Med will be corrected to N Engl J Med; but N Engl J Med will be left as it is. In this instance, using the other system (adding the punctuation to the correct name in the dataset) will work better.

If your manuscript has endnotes or footnotes with references, clicking #2 will instruct the macro to search those items as well. You can also tell the macro to make the journal names italic, nonitalic, or as they currently are. In this instance, the macro will only change those journal names it highlights. For example, if it doesn’t change/highlight N Engl J Med because it is not in the dataset, it will not change the text attribute of it either.

Clicking #4 lets you change the dataset file to be used by the macro and #5 starts the macro running.

The results of running the Journals macro depend on your dataset. Clearly, the larger your dataset (i.e., the more journals and variations it contains), the greater the impact the macro will have on your reference list. The following image shows the results of running the Journals macro. The Journals macro makes use of track changes and color highlighting. As the first instance (#1) shows, the incorrect journal name, Am. J. Kidney Dis. Off. J. Natl. Kidney Found., was corrected to Am J Kidney Dis and highlighted in cyan. The cyan tells me that the name is now correct. Note that the change was made with tracking on, which gives me the opportunity to reject the change. The green highlight (#2) tells me that the journal name Pharmacotherapy was correct as originally provided. And #3 tells me that this journal name variation is not found in my dataset. At this juncture, I would look up the journal in PubMed Journals, open the Journal Manager, and add the variation and any other needed variations of the name to the dataset so that next time it will be found and corrected.

Results of Running the Journals Macro

I know this seems like a lot of work, and it is when you are starting out to build the dataset. But as your dataset grows, so do your profits. Consider this: If the reference list you need to check is 100 entries, how long does it take you to check each one manually? I recently checked a reference list of 435 entries. The author names were done incorrectly (see The Business of Editing: Wildcarding for Dollars for examples) and the year-volume-pages portion of the references were also in incorrect order. Most — not all — of those errors I was able to correct in less than 10 minutes using wildcarding. That left the journal names.

Nearly every journal name was incorrectly done. With my large dataset (over 64,000 variations), it took the Journals macro 32 minutes to correct the journal names. (Nine entries were not journals and so were not in the dataset and seven incorrect journal names were not in the dataset and had to be added afterward.) I still had to go through each entry in the reference list, but to complete a review of the reference list and make any additional corrections that were needed took me an additional 2 hours and 10 minutes. In other words, I was able to completely edit a 435-entry reference list, fixing all of the formatting problems and incorrect journal names, in less than 3 hours.

How quickly could you have done the same?

Combining macros is a key to efficiency. Recognizing that a problem has a macro solution and then knowing how to impose that solution can be the difference between profit and no profit. Using macros wisely can add fun and profit to the profession of editing.

Richard Adin, An American Editor

____________

Looking for a Deal?

You can buy EditTools in a package with PerfectIt and Editor’s Toolkit at a special savings of $78 off the price if bought individually. To purchase the package at the special deal price, click Editor’s Toolkit Ultimate.

February 18, 2015

The Business of Editing: Wildcarding for Dollars

Freelancers often lack mastery of tools that are available to us. This is especially true of wildcarding. This lack of mastery results in our either not using the tools at all or using them to less than their full potential. These are tools that could save us time, increase accuracy, and, most importantly, make us money. Although we have discussed wildcard macros before (see, e.g., The Only Thing We Have to Fear: Wildcard Macros, The Business of Editing: Wildcard Macros and Money, and Macro Power: Wildcard Find & Replace; also see the various Lyonizing Word articles), after recent conversations with colleagues, I think it is time to revisit wildcarding.

Although wildcards can be used for many things, the best examples of their power, I think, are references. And that is what we will use here. But remember this: I am showing you one example out of a universe of examples. Just because you do not face the particular problem used here to illustrate wildcarding does not mean wildcarding is not usable by you. If you edit, you can use wildcarding.

Identifying What Needs to Be Wildcarded

We begin by identifying what needs a wildcard solution. The image below shows the first 3 references in a received references file. This was a short references file (relatively speaking; I commonly receive references files with 500 to 1,000 references), only 104 entries, but all done in this fashion.

references as received

The problems are marked (in this essay, numbers in parens correspond to numbers in the images): (1) refers to the author names and the inclusion of punctuation; (2) shows the nonitalic journal name followed by punctuation; and (3) shows the use of and in the author names. The following image shows what my client wants the references to look like.

references after wildcarding

Compare the numbered items in the two images: (1) the excess punctuation is gone; (2) the journal title is italicized and punctuation free; and (3) the and is gone.

It is true that I could have fixed each reference manually, one-by-one, and taken a lot of time to do so. Even if I were being paid by the hour (which I’m not; I prefer per-page or project fees), would I want to make these corrections manually? I wouldn’t. Not only is it tedious, mind-numbing work, but it doesn’t meet my definition of what constitutes editing. Yes, it is part of the editing job, but I like to think that removing punctuation doesn’t reflect my skills as a wordsmith and isn’t the skill for which I was hired.

I will admit that in the past, in the normal course, if the reference list were only 20 items long, I would have done the job manually. But that was before EditTools and its Wildcard macro, which enables me to write the wildcard string once and then save it so I can reuse it without rewriting it in the future. In other words, I can invest time and effort now and get a recurring return on that investment for years to come. A no-brainer investment in the business world.

The Wildcard Find

CAUTION: Wildcard macros are very powerful. Consequently, it is recommended that you have a backup copy of your document that reflects the state of the document before running wildcard macros as a just-in-case option. If using wildcard macros on a portion of a document that can be temporarily moved to its own document, it is recommended that you move the material. Whenever using any macro, use caution.

Clicking Wildcard in EditTools brings up the dialog shown below, which gives you options. If you manually create Find and Replace strings, you can save them to a wildcard dataset (1) for future recall and reuse. If you already have strings that might work, you can retrieve them (2) from an existing wildcard dataset. And if you have taken the next step with Wildcards in EditTools and created a script, you can retrieve the script (3) and run it. (A script is simply a master macro that includes more than 1 string. Instead of retrieving and running each string individually, you retrieve a script that contains multiple strings and run the script. The script will go through each string it contains automatically in the order you have entered the strings.)

Wildcard Interface

As an example, if I click Retrieve from WFR dataset (#2 above), the dialog shown below opens. In this instance, I have already created several strings (1) and I can choose which string I want to run from the dropdown. Although you can’t see it, this particular dataset has 40 strings from which I can choose. After choosing the string I want to run, it appears in the Criteria screens (2 and 3), divided into the Find portion of the string and the Replace portion. I can then either Select (4) the strings to be placed in the primary dialog box (see Wildcard Interface above) or Edit (5) the strings if they need a bit of tweaking.

Wildcard Dataset Dialog

If I click Select (4 above), the strings appear in the primary Wildcard dialog as shown below (1 and 2). Because it can be hard to visualize what the strings really look like when each part is separated, you can see the strings as they will appear to Microsoft Word (3). In addition, you know which string you chose because it is identified above the criteria fields (purple arrow). Now you have choices to make. You can choose to run a Test to be sure the criteria work as expected (4), or if you know the criteria work, as would be true here, you can choose to Find and Replace one at a time or Replace All (5).

The Effect of Clicking Select

I know that many readers are saying to themselves, “All well and good but I don’t know how to write the strings, so the capability of saving and retrieving the strings isn’t of much use to me.” Even if you have never written a wildcard string before, you can do so quickly and easily with EditTools.

Creating Our String

Let’s begin with the first reference shown in the References as Received image above. We need to tackle this item by item. Here is what the author names look like as received:

Kondo, M., Wagers, A. J., Manz, M. G., Prohaska, S. S., Scherer, D. C., Beilhack, G. F. et al.:

What we have for the first name in the list is

[MIXED case multiletter surname][comma][space][single UPPERCASE letter][period][comma]

which makes up a unit. That is, a unit is the group of items that need to be addressed as a single entity. In this example, each complete author name will constitute a unit.

This first unit has 6 parts to it (1 part = 1 bracketed item) and we have identified what each part is (e.g., [MIXED case multiletter surname]). To find that first part, we go to the Wildcard dialog, shown below, and click the * (1) next to the blank field in line 1. Clicking the * brings up the Select Wildcard menu (2), from which we choose Character Menu (3). In the Character Menu we choose Mixed Case (4) because that is the first part of the unit that we need to find.

Wildcard First Steps

When we choose Mixed Case (4 above), the Quantity dialog below appears. Here you tell the macro whether there is a limit to the number of characters that fit the description for this part. Because we are dealing with names, just leave the default of no limit. However, if we knew we only wanted names that were, for example, 5 letters or fewer in length, we would decheck No Limit and change the number in the Maximum field to 5.

How many letters?

Clicking OK in the Quantity dialog results in entry of the first portion of our string in the Wildcard dialog (1, below). This tells the macro to find any grouping of letters — ABCd, Abcde, bCdaefTg, Ab, etc. — of any length, from 1 letter to 100 or more letters. Thus we have the criteria for the first part of our Find unit even though we did not know how to write wildcardese. In the dialog, you can see how this portion of the string really looks to Microsoft Word (2) and how it would need to be written if you were writing it manually using Microsoft Word’s Find & Replace.

How this part looks in wildcardese

The next step is to address the next part, which can be treated either as [comma] alone or as [comma][space]. What we need to remember is that we will need the [space] in the Replace string: if we search for [comma][space] as a single part and therefore have no separate [space] entry, we will need to supply the space in the Replace ourselves. For this example, I will combine them.

Because these are simple things, I enter the [comma][space] directly in the dialog as shown below. With my cursor in the second blank field (1), I simply type a comma and hit the spacebar. You can verify this by looking below in the Find line of wildcardese (2), where you can see (, ):

Manually adding the next part

The remaining parts to do are [single UPPERCASE letter][period][comma]. They would be done using the same techniques as the prior parts. Again, we would have to decide whether the [period] and [comma] need to go on separate lines or together on a single line. Why? Because we want to eliminate the [period] but keep the [comma]. If they are done together as we did [comma][space], we will manually enter the [comma] in the Replace.

For the [single UPPERCASE letter], we would follow the steps in Wildcard First Steps above except that instead of Mixed Case, we would select UPPER CASE, as shown here:

Selecting UPPER CASE from the Characters Menu

This brings up the Quantity dialog where we decheck No Limit and, because we know it is a single letter we want found, use the default Minimum 1 and Maximum 1, as shown here:

A Quantity of 1

Clicking OK takes us to the main Wildcard dialog where the criteria to find the [single UPPERCASE letter] has been entered (1, below). Looking at the image below, you can see it in the string (2). For convenience, the image also shows that I manually entered the [period][comma] on line 4 (3 and 4).

The rest of the Find criteria

The Wildcard Replace

The next step is to create the Replace part of the string. Once again, we need to analyze our Find criteria.

We have divided the Find criteria into these 4 parts, which together make up the Find portion of the string:

  1. [MIXED case multiletter surname]
  2. [comma][space]
  3. [single UPPERCASE letter]
  4. [period][comma]

The numbers represent the numbers of the fields that are found in the primary dialog shown above (The Rest of the Find Criteria). What we need to do is determine which fields we want to replace and in what order. In this example, what we want to do is remove unneeded punctuation, so the Replace order is the same as the Find order. We want to end up with this:

  1. [MIXED case multiletter surname]
  2. [space]
  3. [single UPPERCASE letter]
  4. [comma]

The way we do so is by filling in the Replace fields. The [space] and the [comma] we can enter manually. You can either enter every item manually or you can let the macro give you a hand. Next to each field in the Replace column is an *. Clicking on the * brings up the Select Wildcard dialog:

Select Wildcard

Because what we need is available in the Find Criteria, we click on Find Criteria. However, the Select Wildcard dialog also gives us options to insert other items that aren’t so easy to write in wildcardese, such as a symbol. When we click Find Criteria, the Use Find Criteria dialog, shown below, appears. It lists everything that is found in the Find criteria by line.

Use Find Criteria dialog

Double-clicking the first entry (yellow highlighted) places it in the first line of the Replace, but by a shortcut — \1 — as shown in the image below (1). If we wanted to reverse the order (i.e., instead of ending up with Kondo M, we want to end up with M Kondo,), we would select the line 3 entry in the Use Find Criteria Dialog above, and double-click it. Then \3 would appear in the first line of Replace instead of \1.

The completed wildcard macro

For convenience, I have filled in the Replace criteria (1-4), as the Completed Wildcard Macro image above shows. The [space] (2) and the [comma] (4) I entered manually using the keyboard. The completed Replace portion of the string can be seen at (5).
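
If you prefer to see the finished product in one place, the whole Find-and-Replace pair built above boils down to a single wildcard string. Written by hand in VBA, it would look roughly like this (EditTools generates and stores the string for you, and its exact wildcardese may differ in detail):

    Sub CleanAuthorNameUnits()
        ' Sketch: [surname][comma][space][initial][period][comma] becomes
        ' [surname][space][initial][comma], e.g., "Kondo, M.," -> "Kondo M,".
        With ActiveDocument.Range.Find
            .ClearFormatting
            .Replacement.ClearFormatting
            .MatchWildcards = True
            .Text = "([A-Za-z]@), ([A-Z]).,"
            .Replacement.Text = "\1 \2,"
            .Execute Replace:=wdReplaceAll
        End With
    End Sub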

The next decision to be made is how to run the string — TEST (6) or manual Find/Replace (7) or auto Replace All (8). If you have not previously tried the string or have any doubts, use the TEST (6). It lets you test and undo; just follow the instructions that appear. Otherwise, I recommend doing a manual Find and Replace (7) at least one time so you can be certain the string works as you intend. If it does work as intended, click Replace All (8).

You will be asked whether you want to save your criteria; you can preempt being asked by clicking Add to WFR dataset (9). You can either save to an existing dataset or create a new dataset. And if you look at the Wildcard Dataset dialog above (near the beginning of this essay), you will see that you can not only name the string you are saving, but you can provide both a short and a detailed description to act as reminders the next time you are looking for a string to accomplish a task.

Spend a Little Time Now, Save Lots of Time Later

Running the string we created using Replace All on the file we started with will result in every instance of text that meets the Find criteria being replaced. I grant that the time you spend to create the string and test it will take longer than the second and subsequent times you retrieve the string and run it, but that is the idea: spend a little time now to save lots of time later.

I can tell you from the project I am working on now that wildcarding has saved me more than 30 hours of toiling so far. I have already had several chapters with 400 or more references that were similar to the example above (and a couple that were even worse). Wildcarding let me clean up author names, as here, and let me change cites from 1988;52(11):343-45 to 52:343, 1988 in minutes.
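
The cite cleanup mentioned above is the same trick with a different pattern. Assuming the cites are consistently in the form year;volume(issue):firstpage-lastpage with a plain hyphen, a string along these lines does the reordering (again a sketch, and worth testing on a copy first):

    Sub ReorderCites()
        ' Sketch: "1988;52(11):343-45" -> "52:343, 1988" (issue and ending page dropped).
        With ActiveDocument.Range.Find
            .ClearFormatting
            .Replacement.ClearFormatting
            .MatchWildcards = True
            .Text = "([0-9]{4});([0-9]@)\(([0-9]@)\):([0-9]@)-([0-9]@)"
            .Replacement.Text = "\2:\4, \1"
            .Execute Replace:=wdReplaceAll
        End With
    End Sub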

As you can see from this exercise, wildcarding need not be difficult. Whether you are an experienced wildcarder or new to wildcarding, you can harness the power of wildcarding using EditTools’ WildCard Find & Replace. Let EditTools’ WildCard Find & Replace macro system help you. Combine wildcarding with EditTools’ Journals macro and references become quicker and easier.

Richard Adin, An American Editor

Looking for a Deal?

You can buy EditTools in a package with PerfectIt and Editor’s Toolkit at a savings of $78 off the price of buying each separately. To purchase the package at the special deal price, click Editor’s Toolkit Ultimate.

February 4, 2015

Should Editors Eat Donuts?

Not too far from where I live is a Dunkin’ Donuts. I rarely eat donuts these days; I find them too sweet and, more importantly, too fattening. Needless to say, like many Americans who work at sedentary jobs, I have a weight problem: I’m a bit more than a tad overweight.

Regardless, there are times when I crave donuts — maybe twice a year. My wife takes the position that I eat them so rarely, if I have a few, it will do me no harm. Of course, that is the argument I give whenever I want to eat something that I know isn’t good for me.

The problem is compounded by the donuts being sugar coated — take a bite and the sugar is everywhere — over the keyboard, on my fingers, on my shirt, even on my black cat! Which brings back memories.

I wish I could say the memories are fond memories, but they aren’t. They are memories of when I first started freelancing and worked on paper. What a nightmare the combination of coffee, donuts, and paper was. Especially the jelly donuts, which had a tendency to leak out the side opposite the one I was biting. I still remember trying to catch the jelly before it hit the manuscript only to also knock over my coffee cup (that was in the days when I drank coffee, before I wised up and began drinking only green tea).

There it was — manuscript that I had to return to the client quickly, coated with a brown stain topped with red jelly and a dissolved sugar coating that could no longer be simply brushed off. And because the manuscript was in a pile, the brown stain was rapidly making its way through the stack.

As you probably have guessed, I never did get repeat work from that client. But I did learn a valuable lesson: donuts and editing didn’t mix when editing on paper.

I rapidly transitioned from editing on paper to editing onscreen. In fact, the transition was complete within 6 months of my beginning freelancing. I admit that fear of recurring jelly catastrophes was one motivating factor in my transition to onscreen-only editing, although the major reason was that in those days very few editors were willing to edit onscreen, and my doing so enabled me to get more work and charge more, something I considered a winning combination.

But the transition didn’t solve the problem. In those days, too, I was in great physical shape and didn’t worry about extra pounds (30 years later, I worry about everything I eat), so I could indulge my donut cravings with abandon. But the jelly donuts didn’t cooperate. I’d bite into one and out would come the jelly from the opposite end — kerplunk on my keyboard. It didn’t take long for the jelly to act like superglue and keep keys from working.

Cleaning those old keyboards was virtually impossible, so I did the next best thing: I bought a large quantity of keyboards at wholesale so I could simply junk a sticky one. Given the choice, I’d scrap the keyboard before I’d give up my donuts. Of course, coffee and soda spills also occasionally happened, but at least no manuscripts were getting destroyed — just keyboards.

I finally had to rethink my priorities when the manufacturer of my favorite keyboards went out of business (Do you remember Gateway Computer’s programmable keyboards?) and I couldn’t buy any more of them. Sure there were other keyboards but these were special because they were programmable.

Eventually I gave up eating donuts while working. I still like a cup of tea, but I haven’t spilled any tea on my keyboard in a couple of decades. But then came last week and I had that craving for a donut. Should I give in to it? Should I just take a break from work and eat it away from my keyboard? Should I gamble and continue working while enjoying a donut or two?

My questions got me thinking about the overriding question: Should editors eat donuts? Let’s face it: donuts aren’t particularly healthy, especially not for an old-timer like me. And no matter how carefully one eats a donut, some of it will end up on the keyboard. Besides, eating while working is a distraction: concentration is broken with each bite, and it is so very easy to put sticky fingers on the keys.

After due consideration, I have come to the conclusion that an editor like me should not eat donuts while working, and probably not at all. There just isn’t an upside other than the fleeting sensory satisfaction. Thinking that, however, didn’t stop me from lusting after a sweet. (I think it was just an excuse to stop reading about hematology!) So I decided to poll the family and see what advice the family would give.

Alas, the “family” these days is my wife, my dog, and my cat. My dog almost never speaks and I haven’t got telepathy mastered, so she didn’t give an opinion. My cat was sleeping and protested at being disturbed. No advice there. That left my wife. In her case, action spoke louder than words — she gave me a donut.

I took the hint, ate the donut, drank some tea, lamented having eaten the donut as it repeated for what seemed an eternity, and swore an oath that there would be no donuts in my future. And to demonstrate the firmness of my decision, I threw away the coupon for free donuts.

I wonder how long my resolve will last.

Richard Adin, An American Editor

February 2, 2015

The Business of Editing: The Crystal Ball Says…

Readers of An American Editor know that one of the tasks I believe an editor has to do — preferably continuously, but at least yearly — is to try to determine future trends that might affect their business. This is not easy to do, but it is necessary for a successful future business. Every time I urge prognostication, I am asked how to do it and what trends I foresee.

My answer to what trends I foresee has been no answer at all. The reason is that what is a trend for me is not a trend for you. Our businesses and our plans for the future are not the same. What is important to my future business is different from what will be important to yours.

My answer to how to prognosticate has been vague. The bottom line is that there is no single, scientific way to prognosticate because so many factors are involved. But I am going to attempt to illustrate one method and to identify a trend I see for books, especially ebooks.

One thing I have discovered in recent years about colleagues is that many have very narrow reading habits. To my surprise, some colleagues read only the material they are working on; they do no “outside” reading, preferring to watch television or do other things. Other colleagues do read, but either not much or within very narrow confines, and generally for amusement rather than for education.

Trend prognostication requires broader reading habits. It is not enough, for example, to read only romance novels when most of your editing is geology journals. Narrow reading is not good for many reasons, not least because it limits the expansion of your knowledge base. We all have limited reading ranges because of the sheer volume of material that is available. I struggle to keep up with the books I buy (see the series “On Today’s Bookshelf” for some of the titles I acquire) because I spend a significant amount of time trying to keep up with the periodicals I subscribe to. But between the books and periodicals I do read, I gain a broader knowledge base from which to discern trends that will affect my business.

“We Know How You Feel”

A good example of an article that triggered future thinking (and the foundation for this essay) is “We Know How You Feel” by Raffi Khatchadourian, which appeared in the January 19, 2015 issue of The New Yorker (pp. 50–59).

The article is a discussion of the current state of, and future expectations for, computers’ ability to “read” emotions. The idea is not new and has been worked on for decades, but it is in recent years that great strides have been made. Software can now determine, with 90% accuracy, whether your facial expression is one of anger or confusion or some other emotion. What the software can do is simply amazing; what is expected in the not-too-distant future is Orwellian.

I read the article and was amazed, but then I began thinking about whether and how this will impact my work. I grant that I am looking a decade down the road, perhaps more, but then the way some companies move, perhaps not. What I ultimately want is to determine how I can position myself so clients need to come to me to take advantage of skills that perhaps only I will have at the beginning of the trend. I want to be able to command and control the market for editorial services in this up-and-coming field.

I hear you asking “What up-and-coming field?” “How can this possibly relate to manuscripts?”

A Future Trend?

Think about how books are bought today and who buys them. (This analysis can be applied to anything with a manuscript; I am using books to encompass all.) In addition to the consumer who buys a book to read, publishers buy books to publish. When a publisher “buys” a book, it does so through an advance. Whenever we buy a book, we gamble that the book will be to our liking or, in the publisher’s case, that it will be a bestseller. The emotion-reading chip of the future could remove that gamble.

The first thing I see is the software being embedded in ebook reading programs and devices. Where we download a reading application to our tablet, the tablet itself will come with the emotion-detecting software and the downloaded app will link to it. Emotion-detecting software can collect all kinds of data about reader likes and dislikes and transmit it to the publisher. Imagine learning that fewer than 25% of the purchasers of a particular book actually read more than 20% of it, and that the reason is that they find it confusing. Perhaps the publisher will rethink publishing the second book in the series or, more likely, will take that information and help the author rework the second book to make it a better seller.

The second thing I see is that the emotion-detecting software will change the way books are sold to consumers. Today we pay in advance; with this software, perhaps we will pay only if we like the book or read a certain amount of it. In other words, all books will initially be free, with payment based on how much readers like them and how much they read; in effect, books will come with an enjoyment guarantee.

The third thing I see — and the most important — is the change in how books are written and the role of the editor in the creation process. I see books being rewritten based on objective reader responses. Today we rely on beta readers telling us what they think about a book. But beta readers miss many clues that can be picked up only by a trained observer. For example, a beta reader may well like a book but not realize (or remember) that while reading chapter 4 she was confused or turned off by the characterizations or was very (dis)pleased with an exchange between characters. Or that the author tends to meander, which makes the reader yawn and wonder if the author will ever get back on track.

In other words, emotion-detecting software can make authors and editors more knowledgeable about what is right and what is wrong with a manuscript. Are readers turned off by character names? Are they okay but not happy with the lead character being a grammar school dropout? Do they like the story better when the child is 10 years old rather than 12 years old? Do readers become frustrated every time a particular minor character appears and then become happy when he leaves the storyline? Are readers frustrated by the never-ending acronyms or localisms? How quickly do they tire of the constant, repetitive swear language?

When we use beta readers today, we usually use people who are familiar with the genre. For example, if we are writing a space opera, we tend to find beta readers who are space opera fans. But what can that beta reader tell us about how readers of paranormal or fantasy or steampunk fiction will react to the book? More importantly, if you get a paranormal reader as a beta reader, how valuable is their feedback (today) in determining what will and will not appeal to other paranormal readers?

It is not that beta readers today are not useful; they are very useful. It is that emotion-detecting software can catch all the emotional nuances — the ups and downs, the hates and loves, the likes and dislikes — that we express unconsciously. Instead of “The book reads okay but I do not find the characters interesting,” emotion-detecting software could tell us which characters fit that description, which gave a glimmer of interest, and which were very interesting, thereby enabling an author to rework the manuscript appropriately.

The Editor Who…

The editor who is familiar with emotion-detecting software will be able to better guide an author. The editor will be able to interpret the results and discover the writing techniques the author uses that readers like and dislike. (Does, for example, the repetitive use of “further” to begin a sentence annoy readers, or do they not care? Or do readers smile at certain character names but frown at others? Is a reader’s reaction to a character driven by the character himself, regardless of name, or by the character’s name? Do the readers who read the version of the manuscript that sets the action in Berlin like the book better than those reading the version where the action occurs in Cairo? Or vice versa? How are readers reacting to various sections of dialogue? Do readers find the characterizations or the storyline unbelievable? Is it likely that readers will give positive word-of-mouth feedback to fellow readers?)

The editor who can offer such a service first will have a unique service to sell and will be able to command higher prices for it. It is like the days when paper editing was dominant and a few editors were able to show publishers how to save money by editing on a computer, even though those editors expected to be paid more than other editors. The early adopters had a head start that was difficult for other editors to overcome, especially those who resisted the transition.

Emotion-detecting software has the potential to revolutionize the publishing industry, just as the advent of ebooks and the transition to editing on computers did. The question is, will you spot the trend and leap on it? Perhaps today you can only follow its progress, but that is what trend-spotting is about: identifying the happenings that need to be followed closely so you can grab the opportunity as soon as possible.

Imagine being the only editor who offers indie authors a way to exponentially increase the likelihood of success. That is what prognostication is all about.

Richard Adin, An American Editor

January 28, 2015

Lyonizing Word: The Right Tool for the Job

by Jack Lyon

The sardine fork. The oyster ladle. The cake breaker. The butter pick. Those persnickety Victorians had a utensil for everything! You’ll find some interesting examples online.

Was all of that really necessary? I still eat the occasional sardine, and an ordinary table fork gets the job done. But I’m willing to bet that if I ever tried an actual sardine fork, I’d immediately realize the advantages of doing so. If I ever needed to ladle out oysters, I’ll bet an oyster ladle would be the perfect tool for the job.

The Wrong Tools

Every editor I know uses Microsoft Word. It’s the standard solution, the default program, the accepted tool for word processing. But is it the best tool for editing? Out of the box, it’s not. It has too many features that editors don’t need, and they’re always getting in the way.

When you’re editing, how often do you use SmartArt? How about WordArt? Page color? No? Then why not get rid of them? Why not turn Word into a lean, mean editing machine? You can do this by customizing Word’s Ribbon. To do so, click File > Options > Customize Ribbon.

On the right side of your screen, you’ll now see a list of the Ribbon tabs and groups, like this:

Jack Lyon Graphic 1

Notice that I’ve unchecked the “Mailings” tab. I don’t want it showing because it’s something I never use. (Note: If you use macros, you should probably keep the “Developer” tab; it allows access to those macros and also lets you load various document templates that may include macros.)

Now see that dropdown list at the top of the window? The one that says “Main Tabs”? Click it and select “All Tabs.” Now you’ll have many more options to uncheck:

Jack Lyon Graphic 2

Do you really need Chart Tools? Drawing Tools? Picture Tools? If not, make them go away. (Don’t worry—if necessary, you can always get them back again.)

So far, we’ve been removing whole groups of features at once, but you can also remove individual items from a group—if they are items you’ve previously added. Unfortunately, Microsoft won’t let you remove the individual default features it thinks you need to have.

The Right Tools

The other problem with Microsoft Word is that it doesn’t have enough of the tools that editors need—at least not by default. Here again, the solution is to customize the Ribbon. Again, click File > Options > Customize Ribbon. This time, look at the window on the left. In the top dropdown box, select “Commands Not in the Ribbon.” Very interesting!

Jack Lyon Graphic 3

These are Word’s “hidden” commands, the features I encouraged you to explore in my previous article “Let’s Go Spelunking!”

Using the buttons in the window, you can add these features to the groups of your choice on Word’s Ribbon. You can even add your own custom tabs and groups by clicking the buttons labeled “New Tab” and “New Group.” How about adding a tab called something like “Editing Tools,” with all of the features you need for editing? If you’re also a writer, you could add a tab called “Writing Tools.” Some of the features would be different; some of the features would be the same. There’s nothing wrong with having certain features duplicated between tabs or groups, if that makes your work easier.

You can select other features by clicking the dropdown list and selecting “All Commands.” You can even select macros and add them to the Ribbon.

Add-In Tools

Unfortunately, even with the wealth of features that Word provides, there are other editing tools that it simply doesn’t offer. For example, how often do you need to transpose two words? Two characters? How much time do you spend lowercasing articles and prepositions in titles? How often do you have to reach for the mouse in order to apply a style?

This is where add-in programs come in. “What’s an add-in program?” you ask. An add-in program is a Microsoft Word template that includes custom macros, Ribbon items, and keyboard shortcuts created specifically for a particular task—kind of like those Victorian utensils. As the name suggests, an add-in isn’t an independent piece of software; it actually works inside Microsoft Word, adding new features that then seem to be an integral part of Word. This isn’t some kind of hack, by the way; Microsoft Word was designed to support such add-ins, which is what makes them possible.

I’m partial to my own add-ins, of course, the ones I sell on the Editorium website. I’m really an editor, not a programmer, and I created these add-ins to make my own work easier. But I think you might like them too.

One of my favorites is the “Cap Title Case” feature in Editor’s ToolKit. When I’m working on a manuscript and come across a title like “The Ghost In The Machine,” or worse, “THE GHOST IN THE MACHINE,” I select the title and press the F5 function key (which activates the “Cap Title Case” feature). Like magic, the title is now capped like this: “The Ghost in the Machine.”
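If you’re curious what a feature like that does under the hood, here is a bare-bones sketch of the idea in VBA. It is not the ToolKit’s actual code, and the list of short words to lowercase is only an illustration, but it shows the basic logic: cap every word in the selection, then knock the articles, conjunctions, and short prepositions back down (leaving the first word alone).

    Sub CapTitleCaseSketch()
        ' Illustrative only; not the Editor's ToolKit macro.
        Dim lowerWords As Variant, w As Variant, aWord As Range
        lowerWords = Array("a", "an", "and", "but", "for", "in", "of", "on", "or", "the", "to")
        Selection.Range.Case = wdLowerCase     ' flatten ALL CAPS first
        Selection.Range.Case = wdTitleWord     ' then cap the first letter of every word
        For Each aWord In Selection.Range.Words
            For Each w In lowerWords
                If LCase(Trim(aWord.Text)) = w And aWord.Start > Selection.Range.Start Then
                    aWord.Case = wdLowerCase   ' lowercase short words, but never the first word
                End If
            Next w
        Next aWord
    End Sub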

If I want to transpose two words, I put my cursor anywhere in the second word and press the F11 key. To transpose two characters, I press F12. Rather than reaching for the mouse to apply a style, I press F5, which puts all of the styles at my fingertips. And as they say on television, there’s much, much more!
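Transposing is just as simple in concept. The sketch below is, again, only an illustration of the approach rather than the ToolKit’s own code (which is more careful about punctuation and edge cases): grab the word at the cursor and the word before it, swap their text, and, in an add-in, bind the macro to a key such as F11.

    Sub TransposeTwoWordsSketch()
        ' Illustrative only: swap the word at the cursor with the word before it.
        Dim firstWord As Range, secondWord As Range, keepText As String
        Set secondWord = Selection.Words(1)
        Set firstWord = secondWord.Previous(Unit:=wdWord, Count:=1)
        If firstWord Is Nothing Then Exit Sub
        ' Word ranges include trailing spaces, so trim them off before swapping
        firstWord.MoveEndWhile Cset:=" ", Count:=wdBackward
        secondWord.MoveEndWhile Cset:=" ", Count:=wdBackward
        keepText = firstWord.Text
        firstWord.Text = secondWord.Text
        secondWord.Text = keepText
    End Sub

    Sub BindTransposeToF11()
        ' Run once, for example from an add-in's setup routine, to assign the key.
        CustomizationContext = NormalTemplate   ' a real add-in would usually target its own template
        KeyBindings.Add KeyCode:=BuildKeyCode(wdKeyF11), _
                        KeyCategory:=wdKeyCategoryMacro, _
                        Command:="TransposeTwoWordsSketch"
    End Sub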

All of these are small things, but those small things add up to big savings in time. And when you’re editing for a living, time is money.

So how much is an add-in actually worth? If it saves you an hour on a single project, it’s probably paid for itself. On the next project, it pays for itself again. And on and on, into the future. Seldom does such a small investment reap such big rewards.

Yes, this is a sales pitch, but I genuinely want you to succeed. That’s why I promote other add-ins like Rich Adin’s EditTools and Daniel Heuman’s PerfectIt.

These tools can make a real difference in how efficiently you work and how much money you can make. With that in mind, why not get them all, at a very special price?

Don’t think of these tools as an expense; think of them as an investment. Then the next time you need an editing tool, you’ll have it—and it will be the right tool for the job. Instead of dishing out tomato slices with a fork, you can use a tomato spoon! Instead of picking up bacon with your fingers, you can use a bacon fork! Using the right tool for the job makes all the difference in the world.

Jack Lyon (editor@editorium.com) owns and operates the Editorium, which provides macros and information to help editors and publishers do mundane tasks quickly and efficiently. He is the author of Microsoft Word for Publishing Professionals and of Macro Cookbook for Microsoft Word. Both books will help you learn more about macros and how to use them.

January 14, 2015

Dealing with Editor’s Bias

Aside from being a professional editor, and not just an editor, the one thing I like to think I am is bias-free. Of course, that is more wishful thinking than reality.

Reality runs more like this: Every editor is biased. The important question is: Do I recognize my biases? If I do not recognize my biases, I fail to provide the quality and level of service my client pays me for.

Which raises another question: Is there a relationship between bias control and the fee being earned? That is, is a high-paying client entitled to greater effort on my part to control my biases than is a low-paying client?

From the beginning — every editor is biased. We have subject-matter biases, client biases, and editorial biases, among a world of other biases. Client and subject-matter biases are easily dealt with: we simply do not (hopefully) undertake projects in areas we abhor or from clients we cannot stand. For most of us, the problematic area is editorial biases.

One of my editorial biases is “due to.” How I hate that phrase. Yes, it does have a proper place and use, and then it should be crowned king. But authors use “due to” to mean so many different things that it has become, for me, the sign of a lazy author. The author may be brilliant — a genius in the field — but the author who uses “due to” as an all-purpose substitute is lazy. And to my way of thinking, the editor who (speaking of nonfiction, not fiction) doesn’t try to replace the vagueness of “due to” with the more precise and accurate term it is substituting for is even lazier than the author.

There are at least 20 alternatives for “due to” and each alternative carries important connotations and levels of precision. The point is that I know I have a bias against the use of “due to” and instead want more precise language used so that the reader does not have to guess at which alternative is meant.

I also prefer precision in time; I have a time bias. For example, I dislike when an author writes “in recent years” or “in the past 20 years.” Using this type of time reference allows the time to shift. The shift occurs because the reference was made when the author was writing the sentence, which could have been 5 years ago or 2 days ago, but doesn’t allow for the passage of time since the writing of those words, or for the editing and production time until publication, or for the book’s expected several-year shelf-life.

There are other words I have a bias against, such as “since” as a substitute for “because” and “about” as a substitute for “approximately.” Many of us also have biases when it comes to hyphenation (is it “co-author” or “coauthor”? “copy-edit” or “copyedit”?). I am aware of my biases and try to be judicious in applying them. Where a bias doesn’t affect understanding or meaning, I weigh whether or not to act on it. Quite often that decision is made based on the subject matter and complexity of the book I am editing.

Yet, there is one more constraint on the exercise of my biases: Can I justify my decision to act/not act? Justification does not include “I like it better” or “It looks better to me.” Clearly, “due to” is liked better and looks better to the author. My justification for changing “due to” is grounded in clarity/precision versus vagueness/imprecision.

Yet, in discussions with colleagues, I find that the answer depends on whether what I view as editorial biases is seen as bias or as a basic grammar/editing matter. That is, if a colleague believes that word choice is not a matter of bias but purely a matter of usage or grammar, the colleague sees no reason either to think about the issue or to exercise control. Thus, in the case of “due to,” the colleague would rarely, if ever, change or query its use. For such a colleague, “since” is always properly used both to convey the passing of time and to mean “because.”

I asked earlier if there is a relationship between my control of my biases and the fee paid by the client. The answer is “no.” Regardless of how much I am being paid, I should always control my biases because my role is to help the author, not substitute for the author. From an ethical perspective, “no” is the only correct answer.

For colleagues who do not view these things as editorial biases, the question does not arise. It arises only for those of us who take the time to consider whether “since” is being used to convey a sense of time or as a substitute for “because.” It becomes an issue for us because the longer we take to decide what “due to” is substituting for, the less money we earn if we are charging a per-page or project fee rather than an hourly rate.

A final thought: To do a proper editing job, we need to create and maintain a project stylesheet. It is appropriate to include in the stylesheet the “rules” we are following when it comes to our biases. Alternatively, we could insert a note, in the form of a query, at the first instance, explaining the rule we are following. For example, the following could be used either as a note to the author or as a stylesheet explanation:

Although in today’s English “since” and “because” are considered synonymous, I adhere to the rule that “since” is used to express the passage of time, as in “since 2000,” and the terms are not synonymous. I adhere to this rule because I believe it makes your meaning both clearer and more precise, and considering the subject matter, clarity and precision are important tools for ensuring there is no miscommunication between you and your reader.

Do you recognize your editorial biases? How do you deal with them?

Richard Adin, An American Editor
