You Can't Handle the Proof: AI Editor Fails a Simple Test

  • Writer: Phil Carlucci
  • Nov 27, 2024
  • 2 min read

Updated: Dec 19, 2024

A recent conversation in a writing forum about reducing self-publishing costs by using AI editing software rather than hiring real-life professionals prompted me to put one of these AI editors to a basic test.


Human editors, of course, are typically given an editing test when applying for a job at a media company or publication, and in the freelance world they have to sell themselves through sample edits. It's only fair, then, that an AI editor be expected to pass one as well. At the very least it should perform better than the previous test I gave ChatGPT.


This assignment was just as simple — take a portion of a recently published and slightly amended magazine article and fix the mistakes that an experienced professional editor would eliminate. The test-taker was AI software called editGPT.


The editor completed a quick scan and displayed its results, with red marks indicating my "errors" and green ones indicating its recommended fixes, much like Track Changes in Word. It was a passable first effort even though I disagreed with and rejected its input on my use of a few commas and hyphens.


Next, I inserted some common errors to see what it would catch. In a paragraph introducing a speaker (Smith) ahead of a direct quote, I misspelled his name (Smyth) in the second reference. The editor caught it. Not bad.


So instead of misspelling the name, I threw in another common error — mistakenly referencing the wrong person, as in:


Longtime golf writer Anthony Smith played the new course during a media preview last October and was impressed by the level of thought put into the layout. “It’s really hard to design strong golf holes in a space this compact,” Jackson says. “This will be a huge success.”

Uh oh. This one slipped by.


Then, instead of the incorrect Jackson, I substituted the first name (...this compact," Anthony says) to see if the AI would have any thoughts on the informality. Slipped by again.


Now I added some basic flaws I used to see often while working as a sports editor, specifically instances where the numbers don't add up. Like this one:


"Judge had three hits in the doubleheader — two in the opener and two more in the nightcap."


Interestingly, the editor missed it in the first two passes, but on the third pass it changed the sentence to:


"Judge had three hits in the doubleheader — two in the opener and one more in the nightcap."


I hesitate to call that a win for editGPT since it has no idea which of the figures is actually correct. Did Judge have four hits in the doubleheader? One hit in game one and two in game two? A human editor would ask the writer or, more likely, check that fact on their own.


I decided to throw the AI editor a bone.


"Because, as everybody knows, 2 + 2 = 3."


Nothing. On multiple passes.


There were more areas where editGPT fell short, but they aren't worth analyzing. The results are already in.


AI editors are nowhere close to standing in for sentient professionals, and writers who choose AI as a cost-cutting measure put the quality of their published work at significant risk.