Anxiety, confusion and a whole lot of panic – these seem to be the three unofficial but unequivocal companions of artificial intelligence. Tethered by an invisible but unbreakable thread, people seem unable to move beyond the promise and potential of AI and its apparently intrinsic link to great risk. This is simultaneously warranted and completely overhyped – there’s no doubt that AI is a double-edged sword, but the fear surrounding its implementation has become almost dystopian in nature.
As we are well aware, fear makes people do weird things. We react in ways that are less the product of logic and reason and more a semi-subconscious attempt to mitigate risk and sidestep what feels like impending peril. And that’s exactly what’s happening in the context of AI – specifically, the use of AI in written work.
One of the biggest splashes made by AI in its most recent introduction to mainstream technology was its use in writing and generating content. Suddenly, instead of having to spend hours brainstorming creative vocabulary and carefully crafting sentences to make them sing, consumers and businesses alike can simply provide AI chatbots with prompts, and something that would’ve taken them ages to create pops up on their screen in mere seconds.
Needless to say, our minds were blown, and almost immediately, our virtual world had grown exponentially and was full of possibilities. People who disliked language and writing were suddenly able to communicate quickly and professionally, and those who already had the skills could generate oodles of content almost instantaneously.
It wasn’t long, however, before excitement turned to trepidation, and fear emerged like a dark cloud enveloping blue skies.
From Wide-Eyed Wonder To Monsters Under the Bed
The shift was dramatic, and the turn was quick.
Somehow, almost overnight, we’d gone from sheer admiration of AI chatbots to a paralysing fear of their potential – and not just their potential, but their incredible ability to mimic human writing. What started off as, “wow, it sounds just like a human!” quickly became, “oh no, it sounds just like a human”. And, at the core of this, the real concern was our ability (or rather, potential inability) to separate one from the other. If AI can generate written content that sounds like it was written by a real person, how will we tell the difference?
This question quickly dominated the conversation – it didn’t, by any means, stop the progress of AI or the improvement of the technology, but it did mean that all the incredible things being done were somewhat overshadowed by scepticism and concern (to different degrees, depending on who you asked).
But, while fear and anxiety about whether or not we’d be able to separate human-generated content from AI-generated content ran rampant, there was a significant philosophical question we seemed to have neglected, in a broader sense.
That is, does it even matter?
Very controversial, no doubt about it. Many people will eagerly and boldly assert that being able to tell the difference between something written by a human and something written by an AI chatbot is absolutely essential to maintaining our humanity, authenticity and originality. Others maintain that the distinction allows for more effective quality control.
On the other side of the argument, AI fiends and fanatics (and also, just ordinary, level-headed people) argue that if the quality is high and the content that is produced is accurate and useful, it shouldn’t matter whether it was written by Jane next door or a complex programme powered by artificial intelligence. Put crudely, if it works, well, who cares?
Regardless of the merit of either argument, the fact is, people are worried about AI, and whether or not it’s logical or reasonable, they want to be able to tell AI-generated content and human writing apart. But, as we’ve quickly come to learn, that’s no easy task.
The AI Detection Dilemma
As quickly as AI chatbots emerged, so too did so-called “AI detection” programmes, claiming that all you had to do was input the suspicious text and the software would tell you, by means of a percentage, whether it was written by a human or generated by AI.
Of course, it wasn’t long before it became clear that these programmes were pretty ineffective, and to be honest, it’s not necessarily even entirely their fault. In my opinion, the crux of the issue is that the fundamental principles upon which they’re based and the problem they’re attempting to solve are the very reasons why these detection programmes can never be accurate.
AI chatbots are constantly improving, moving towards their ultimate goal: to produce text that is indistinguishable from human writing. AI detection programmes are intrinsically reactionary – they exist purely to evaluate what is produced by these chatbots, and so the parameters they use to make judgements are based completely on AI content. They have to identify “markers” of AI slop, and these are weeded out by means of pattern detection – for instance, flagging a specific word that chatbots tend to use quite frequently (one that often comes up is “delve”).
But, firstly, AI chatbots write the way they do because they’re trying to mimic human style. So if a chatbot is using the word “delve” a lot, for example, it’s because it’s found that habit within the data sets it’s been given. And, if AI detection tools use things like that as their main red flags to indicate the use of AI, how are they discerning between a chatbot copying the way a human uses a word and a real human who just tends to overuse a specific word? Well, they can’t.
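To make that flaw a little more concrete, here’s a deliberately simplified sketch of the kind of word-frequency logic these tools lean on. To be clear, this is my own toy illustration, not how any real detection programme is built, and the list of “AI words” in it is entirely made up (apart from “delve”).

```python
# A toy "AI detector" that scores text purely on marker-word frequency.
# Purely illustrative: real detection tools use far more sophisticated models.

# Hypothetical list of "AI words"; only "delve" comes from the discussion above.
AI_MARKERS = {"delve", "tapestry", "furthermore", "moreover"}

def ai_likelihood(text: str) -> float:
    """Return a crude 0-100 'percentage AI' score based on marker-word frequency."""
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in AI_MARKERS)
    # Inflate the marker rate into a percentage-style "confidence" score.
    return min(100.0, hits / len(words) * 1000)

# A human who simply likes the word "delve" gets flagged just as readily
# as a chatbot that picked the habit up from human writing.
human_sentence = "Let us delve into the archives and delve deeper into the records."
print(f"{ai_likelihood(human_sentence):.0f}% 'AI-generated'")
```

Real detectors obviously use far fancier statistics than word counting, but the underlying weakness is the same: every “marker” they latch onto is something at least some humans genuinely do.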
And the issue that flows on from here is that the chatbots are constantly progressing and improving, so their “style” is ever-changing. The detection programmes have to keep up, and so they start flagging more and more “indications” of AI use that are actually just habits mimicked from human writing. Essentially, they’re just highlighting things that humans do and ways in which they write that these AI programmes have managed to copy.
Ultimately, the longer this push and pull between AI chatbots and AI detection programmes goes on, the closer chatbots will come to producing writing of such quality that it is practically indistinguishable from human content, and detection tools will still be flagging it as AI-generated. The problem, though, is that through all of this, these detection tools will be (and already are) flagging content that is written by people. Because the fundamental principle behind the model is flawed.
So, long story short? AI detection models are doomed to fail, and that’s part of the reason why people have started to adopt a new, potentially more problematic, approach: that is, do everything you can to avoid habits adopted by AI chatbots so that your writing will stand out as authentic.
Sounds great, I know. But what does that actually mean? Well, because the things flagged by detection tools as an “AI red flag” are almost always just very normal styles of writing, specific words and even particular punctuation marks, we’re now moving into territory in which we risk completely transforming the way we write (for the worse) in order to avoid being branded an AI con-artist.
Basically, we’re so scared of the perception of having used AI to generate content rather than writing it ourselves, that we’re willing to ruin the quality of our content in the process of seeming authentic.
Perception over authenticity and true quality.
Write Like a Robot To Prove You’re Not One
Sound crude? Well, in my opinion, the idea of having to “write like a robot to prove you’re not one” is as crude as it is real. The notion has now gone beyond subtle changes in writing style to become a mandated list of “dos” and “don’ts”. There are a few things that have made the list, but the problem, it seems, is that the list is growing, and not only that, it’s starting to include things that have always been major parts of language.
The most recent “red flag” – or “AI indicator”, whatever you want to call it – is supposed to be the em dash. And you know what, straight off the bat I’ll agree that chatbots, ChatGPT especially, do seem to enjoy a good em dash. They’re sprinkled into content a little more frequently than I, personally, would prefer.
But ChatGPT isn’t using it incorrectly. In fact, for many people, this use of the em dash may stand out because they simply don’t use it as a punctuation mark themselves – whether that’s because they don’t like it and it’s not really part of their writing style, or because (in many cases, I think) they simply haven’t been trained to use it. So suddenly, the em dash is the hidden weapon of ChatGPT and ought to be avoided at all costs!
The reality, however, is that this is just the latest example of how AI chatbots are progressing and improving in their efforts to mimic human writing.
So, does this mean we should stop using the em dash altogether, because some people may take that as a clear sign that you’ve used AI to generate your content?
No, absolutely not. And to be perfectly frank, I think that to do this would be a fundamental failure to maintain our humanity. To randomly discard a whole punctuation mark because of an AI trend is utter madness. And what’s worse? This will just be the beginning. What’s next, the comma? No more capitalisation because ChatGPT capitalises sentences too well? Fewer paragraph breaks because Grok has gotten too good at separating ideas?
No.
The answer is: keep calm. Keep writing. And don’t let the fear of misperception make you become that which you aim to avoid.
Our writing is human because it’s being written by humans, and there’s so much more to it than punctuation, grammar and syntax. It’s creativity and it’s personality, and for now, those things are still ours.