Easily Identify AI-Generated Text: 6 Simple Methods
How to Identify AI-Slop in Generated Content
Recently, I took the time to explain to one of our corporate clients how to swiftly recognise “AI-Slop”. This term refers to AI-generated content that lacks substance and value. We delved deeper into the world of AI text generation and concluded that, while it’s sensible to use generative AI tools such as ChatGPT, Google Gemini, and Copilot, it’s equally important for staff to meticulously review every word, fact-check, and make necessary corrections before submission.
Recognising AI-Generated Text
A surprising number of students, website creators, product developers, and employees are simply allowing generative AI tools to produce text without any refinement. Here are some straightforward techniques to help identify content created by AI.
The Em Dash Red Flag (—)
This is a significant indicator, and one of the most common giveaways from large language models (LLMs); it’s akin to leaving your wallet at a crime scene.
The Indicator: Many AI systems favour the “em dash” (—) where a human would type a comma, a hyphen, or a simple double hyphen (--).
Take a moment to find this character on your keyboard: —. On a standard Windows keyboard it cannot be typed with a single keystroke. A human writer, particularly when typing quickly or using a standard web content management system, will overwhelmingly opt for the single hyphen (-) instead, because the em dash requires memorising an Alt code (Alt + 0151) or an unusual keyboard shortcut (Ctrl + Alt + Minus on the numeric keypad in MS Word).
AI models, trained on countless professionally edited and punctuated texts, default to the most grammatically correct, yet hardest to type, punctuation. If you come across web content laden with that sleek, long, hard-to-type em dash, chances are an AI is responsible. A human would simply have pressed the hyphen key.
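To make the habit concrete, here is a minimal sketch (my own illustration, not a formal detector; the function name and the idea of treating the em-dash share as a signal are assumptions) that measures how often a text uses the em dash relative to the ordinary hyphen key:

```python
# Heuristic sketch: what share of dash-like characters are em dashes?
# A high ratio is only a hint, not proof, of AI authorship.
def em_dash_ratio(text: str) -> float:
    """Return the fraction of dash characters that are em dashes."""
    em = text.count("\u2014")   # the em dash: —
    hyphen = text.count("-")    # the ordinary hyphen key
    total = em + hyphen
    return em / total if total else 0.0

sample = "AI loves the em dash\u2014it uses it everywhere\u2014while humans just type a-b."
print(em_dash_ratio(sample))  # two em dashes against one plain hyphen
```

A high ratio on its own proves nothing, but combined with the other tells it makes a useful first filter.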
The Absence of Anecdotes
Humans tend to write with messy, specific, and sometimes slightly inappropriate personal stories that ground their content. AI-generated texts, however, lack this detail.
The Indicator: If the content steers clear of specific anecdotes or client stories and resorts to generic, high-level examples, it’s likely generated by AI.
A human discussing cybersecurity issues might say, “Just last week, our largest client dismissed their CTO following a hack that last year’s audit had warned about.” Such details are too specific and carry too much liability for an AI model, which is trained on safe, aggregated data. If the writing remains at a high altitude, offering only abstract concepts like “many organisations fail to update security patches,” it’s a strong indication of AI involvement.
The Emoji Overload 💣
When reading a serious webpage, technical report, or formal email, how many emojis do you expect? Typically none, or perhaps one for a quirky touch.
The Indicator: Excessive emojis in contexts that should be informative or analytical are a tell-tale sign.
Chatbots like Copilot and ChatGPT are designed to engage, often leading to the inclusion of emojis in professional writing. For instance, seeing phrases like:
This is the first point 🐝
This is the second, and it’s more important ‼️
The conclusion is here 💥
indicates AI authorship. A professional human writer typically avoids cluttering content with random emojis.
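As a rough illustration, emoji density is easy to measure programmatically. The sketch below is a simplification: the Unicode ranges cover only the common emoji blocks, not the full emoji specification, and the cut-off for “excessive” is left to you.

```python
# Sketch: count characters that fall in common emoji code-point ranges.
# These ranges are an approximation, not the complete Unicode emoji set.
EMOJI_RANGES = [
    (0x1F300, 0x1FAFF),  # symbols, pictographs, supplemental emoji
    (0x2600, 0x27BF),    # miscellaneous symbols and dingbats
]

def count_emojis(text: str) -> int:
    """Count characters inside the approximate emoji ranges above."""
    return sum(
        1 for ch in text
        if any(lo <= ord(ch) <= hi for lo, hi in EMOJI_RANGES)
    )

print(count_emojis("This is the first point \U0001F41D"))  # one bee emoji
```

In a formal report or technical page, any count above zero or one is worth a second look.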
The Uniformly Perfect Language
Human writing is inherently textured: humorous asides, the occasional typo, and a distinctive voice that doesn’t pile on adjectives.
The Indicator: If every sentence is grammatically flawless and employs unnecessarily complex vocabulary, it’s likely generated by AI.
AI often writes to a high, formal standard and shuns contractions. You might frequently find “utilise” rather than “use” or see formal transitional phrases like “Furthermore” and “In conclusion.” If the text reads as if it condensed multiple Wikipedia entries into one polished block, the fingerprints of machine-generated content are evident.
The Overuse of Lists
AI loves to organise information; after all, it’s a machine. It tends to convert unstructured ideas into tidy lists, often employing bullet points or numbered formats excessively.
The Indicator: If you encounter an abundance of bullet points breaking up what should flow in natural paragraphs, it’s a signal of AI output.
A human writer typically builds nuance and context across several paragraphs before possibly summarising with a list. In contrast, AI, when instructed to be “easily scannable,” often defaults to list structures for every concept. If you see a major point followed immediately by a bulleted summary, it’s probable AI is behind it, favouring clarity at the expense of flow.
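This tell can also be approximated with a quick script. The sketch below is my own illustration (the marker set and sample text are assumptions to tune for your documents); it reports what fraction of non-empty lines open with a bullet marker:

```python
# Sketch: fraction of non-empty lines that start with a list marker.
# The marker set is an assumption; extend it for your own content.
BULLET_MARKERS = ("- ", "* ", "\u2022 ")  # hyphen, asterisk, bullet character

def bullet_density(text: str) -> float:
    """Fraction of non-empty lines that look like bullet points."""
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    if not lines:
        return 0.0
    bullets = sum(1 for ln in lines if ln.startswith(BULLET_MARKERS))
    return bullets / len(lines)

sample = "Intro paragraph.\n- point one\n- point two\n- point three\n"
print(bullet_density(sample))  # prints 0.75: three of four lines are bullets
```

A density well above one half in what should be flowing prose is the pattern to watch for.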
The Fact-Checking Fiasco
This is a significant concern and one of the newest tells of AI content. Unlike humans, AI models predict plausible-sounding text rather than retrieve knowledge, and they can confidently fabricate information.
The Indicator: Watch out for confidently stated facts, statistics, quotes, or citations that cannot be verified or outright do not exist.
If content references a nonexistent expert or refers to a “study by the Department of Technology” without any verifiable source, pause and investigate. An AI is programmed to provide answers; if it cannot find relevant information, it will generate plausible-sounding yet fabricated responses. A human recognises their limits, while AI fills in the gaps.
Conclusion
When looking for signs of AI in your text, focus on the symptoms of artificial perfection and functional shortcomings rather than poor grammar or vague content.
AI’s telltale signs aren’t errors; rather, they represent an over-correction. Look for the excessive use of em dashes (—), the absence of personal anecdotes, the presence of extra emojis, the overly polished language, and the reliance on fictitious facts.
Identifying AI-generated content is no longer about complex detection software; it’s about recognising the consistent habits that machines struggle to break.


