6 Comments

My day job is copy editing at a daily newspaper. Although I spot signs of AI writing, it's quite hard to be completely sure. That I can spot the difference at all is thanks to the way Indians write in English vs. Americans or Brits. For instance, we don't say "individuals" often; we prefer "person" or "people". Since ChatGPT uses "individuals" a lot, I know a colleague has likely used the AI tool when I spot "individual" in their article. But I also know that if you keep regenerating the content with prompts for changes, you can completely camouflage AI writing. And it's quite hard to dispute a claim that the original idea behind an article is the writer's own, and that they only took the help of AI tools to express it correctly.

Great context. Thanks for sharing.

I've posed similar questions to vendors who claim comparable results in identifying AI-generated resumes and cover letters. Again, no responses. Complete snake oil, in my opinion.

I agree.

Also, I'm going to cite this post in my piece on the ethics of disclosing (or not disclosing) the use of AI tools in writing.

Thanks. I am generally neutral on that topic. In some cases disclosure will be deemed important, but in most I don't think it will be. Some journalists have the luxury of review and polish by editors; for others, ChatGPT may provide that service. I am particularly interested in how these technologies can help people writing in a language that isn't their native one.

As you have probably guessed, I'd like to see disclosure of the data that supports claims that tools can identify AI-written text. Many countries already have protections against making false claims. Is there an ethical requirement to proactively disclose evidence of efficacy?
