Should You Use Writing Software to Edit?
In some of my MFA courses, I’ve brought up the fact that I use software to edit and gotten a mixed response from my peers. I want to address some of the concerns I’ve heard and offer suggestions for how to integrate software into the revision process.
What Can Software Do?
It’s important to start with the limitations of this software. Right now those limitations show little sign of changing, though that may not hold forever.
Microsoft Word’s (in)famous spellcheck is not the full extent of what editing software can do. Modern editing software performs much better; the old satires of incessant homophone mix-ups and missed errors depended on unskilled writers using dumb systems.
Now, writers have access to software that applies deep natural language processing and statistical analysis to their text.
That doesn’t mean such tools are perfect. But the software has gone from being a way to catch typos to something that plays a role closer to that of a human writing partner.
With that said, none of the software I’m familiar with will:
- Provide a guaranteed improvement in the final text.
- Rewrite awkward texts for the writer.
- Be a massive source of improvement for an entirely unskilled novice.
When I taught English, I often saw students turn to these tools to get around their own skill limitations.
There’s nothing wrong with using these tools as assistance, but they cannot create a skill where none exists.
Paradoxically, I think a lot of these tools are better suited to near-professional writers seeking to improve the quality of their text. If you’re familiar with the mechanics of grammar but sometimes slip in execution, you’re the best target market for most of these tools.
Electronic Efficiency
I mostly use these tools to speed up my editing process. They make the things you might miss on a re-read stand out.
I’m an idiosyncratic writer, so I make a lot of weird stylistic choices. These systems provide an added benefit of discouraging some of those decisions that might throw off my reader.
The system I use, ProWritingAid, can catch things like passive voice and shoddy word choice. That goes a long way toward highlighting potential changes and catching subtle errors you’d miss on an immediate re-read.
Although this is more beneficial to novice writers, being able to break the feedback down by type of issue also highlights patterns I need to improve on. For instance, I got into a nasty habit of opening consecutive sentences with the same word. Becoming a little more conscious of that saved a lot of time in editing down the road, since changing the lead word of a sentence often requires a total rewrite. (A rough sketch of this kind of check appears below.)
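To make that concrete, here is a minimal Python sketch of a repeated-lead-word check. The sentence splitting and the three-sentence window are my own simplifications for illustration, not how ProWritingAid or any other commercial tool actually implements it.

```python
import re

def repeated_lead_words(text, window=3):
    """Flag sentences whose first word also opened one of the previous `window` sentences.

    A rough heuristic: sentences are split on terminal punctuation and
    lead words are compared case-insensitively. Real tools use proper tokenizers.
    """
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    flagged = []
    for i, sentence in enumerate(sentences):
        lead = sentence.split()[0].lower()
        recent_leads = [s.split()[0].lower() for s in sentences[max(0, i - window):i]]
        if lead in recent_leads:
            flagged.append((i, sentence))
    return flagged

sample = "I went to the store. I bought some bread. Later, it rained. I stayed inside."
for index, sentence in repeated_lead_words(sample):
    print(f"Sentence {index} repeats its lead word: {sentence!r}")
```

Running it on the sample flags the second and fourth sentences, which is the sort of nudge I want at revision time rather than a hard rule.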
There are many tools that can serve this purpose. The relative weight and price of each system can be a deciding factor. Hemingway is light and cheap, and more advanced systems may not have the same price-to-performance ratio for every writer.
Analysis at Scale
One feature of ProWritingAid is an analytics suite that can be tweaked for different writing and different goals. While I don’t break this out for everything I write, I use it to steer my goals as a writer.
This is the kind of work that would take a lot of effort to do by hand, and sometimes a machine can provide insights that a writer wouldn’t think to look for.
For instance, how often do you think about the level of particles in your writing? Me neither, and I’m a nerd.
I haven’t assessed all the different options here. I like ProWritingAid because it offers more systems than I’d ever care to use on a single text, but there are lots of alternatives. If you know what you’re looking for, it’s often possible to find a program that does one particular form of analysis very quickly and easily.
Again, there’s really nothing here that you couldn’t do by hand.
But the value factor is time. Sitting down and doing an analysis like this on a manuscript by hand is part of how a writer improves, and having a system do it for you can become a crutch, especially given the software’s imperfections.
It’s faster to do it by machine, so a writer assisted by software can do deep-dive revision more often. Sometimes the machine can also highlight things that a writer might miss in their own revision process.
It’s a trade-off, and as with everything else, it’s necessary to develop the skill. The software just helps make using the skill less expensive in time and effort.
Fitting the Reader
Most writing software features readability metrics. Like most abstract metrics that try to quantify how readers process a text, they need to be used with caution.
The reading difficulty levels you get from, say, the Flesch-Kincaid grade do not always match how real readers experience a text.
But there’s enough of a correlation to make the data worth having.
That provides a meaningful benefit to the writer. Knowing that a text isn’t opaque helps focus editing efforts in other directions.
Likewise, readability metrics expose poor writing. Extensive run-on sentences will show up in these ratings. If you know the formula behind a rating, you can often make better decisions about your writing with its factors in mind; the Flesch-Kincaid grade, for instance, is built from just two ratios, sketched below.
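Here is a minimal Python sketch of that calculation. The coefficients are the standard published Flesch-Kincaid constants, but the syllable counter is a crude vowel-group heuristic of my own, so expect the output to differ slightly from what any commercial tool reports.

```python
import re

def naive_syllables(word):
    """Very rough syllable estimate: count vowel groups, with a floor of one."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(naive_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# Very simple text can score below grade zero.
print(round(flesch_kincaid_grade("The cat sat on the mat. It was a sunny day."), 1))
```

The two ratios make the levers obvious: shorter sentences pull the first term down, and shorter words pull the second term down.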
There are something like a dozen different ways to assess reading difficulty, and each has its merits and flaws. The one system I discourage writers from using is the Lexile metric, which is a black box. If you can’t see the factors that drive a score, your ability to use it to improve your writing is limited.
That doesn’t mean you want your writing dumbed down. One use for difficulty metrics is tailoring a text to its audience. Avoiding extreme spikes in difficulty is a measurable improvement regardless of where your audience’s threshold sits; one way to flag those spikes is sketched below.
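As an illustration of what a "spike" could mean in practice, here is a small Python sketch that uses average sentence length per paragraph as a stand-in for difficulty and flags paragraphs well above the document average. The proxy and the 1.5x threshold are my own assumptions, not a standard taken from any readability system.

```python
import re
from statistics import mean

def avg_sentence_length_by_paragraph(text):
    """Average words per sentence for each paragraph (a crude stand-in for difficulty)."""
    averages = []
    for para in text.split("\n\n"):
        sentences = [s for s in re.split(r"[.!?]+", para) if s.strip()]
        if sentences:
            averages.append(mean(len(s.split()) for s in sentences))
    return averages

def difficulty_spikes(text, ratio=1.5):
    """Return indexes of paragraphs whose average sentence length exceeds
    `ratio` times the document-wide average (the threshold is arbitrary)."""
    per_para = avg_sentence_length_by_paragraph(text)
    if not per_para:
        return []
    doc_avg = mean(per_para)
    return [i for i, avg in enumerate(per_para) if avg > ratio * doc_avg]
```

Swapping the sentence-length proxy for a full readability formula changes the numbers but not the idea: measure each chunk, compare it to the whole, and look at the outliers.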
Wrapping Up
Writing with software is a lot like writing without it: it still depends on human skill. There’s no substitute for practice and technical knowledge of grammar and language.
But everyone makes errors. The software serves as a layer between the writer and their mistakes. By catching issues more readily than a writer might on their own, it provides a second dose of efficiency. It can also function like a coach, pointing out areas for improvement.
It’s inferior to a skilled human editor at this task, but it’s cheaper at scale and gives feedback quickly.