How to trust what you read online and tell if it’s AI or human
I got this note from Ben in Texas. “Hi there, Kim. I love your podcast. You were talking about AI and I got to thinking. When I read a story online at some website, how can I tell if a human wrote it or some bot?”
Ask USA Today. Last week, stories under a bunch of mysterious bylines suddenly appeared on its site (WashPo has the scoop; paywall link). Did these writers have a pulse?
Staff writers at Reviewed say management published stories written by AI under the names of nonexistent humans. They couldn't find these writers anywhere else, no other bylines, no social media profiles, not even on LinkedIn. Of course, parent company Gannett denies it all.
AI lies
When reading something online, especially at a big site, you want to trust that what you get is the truth. But AI makes things up. Did you hear about the law professor a chatbot falsely accused of sexual harassment? AI made up the whole story.
Humans code AI algorithms, folks, and we’re all full of opinions and biases. When you read an AI-generated article or social media post, remember that you’re actually getting a spoonful of someone else’s viewpoint. It’s like a game of digital telephone, and sometimes, you only hear one side of the story.
I know it’s a lot to think about. Let’s start with identifying what’s AI-generated and what’s not. I’ve got your back with the telltale signs a chatbot made that article or webpage.
It wants to sound important
Remember back in school when you were trying to hit a word or page count? AI-generated text reads the same way: the same information repeated over and over … and over, with only slight changes in phrasing.
Keep an eye out for needlessly fancy vocabulary and eye-rolling transitions like "Moreover," "Consequently" and "Furthermore." That's not a kid at his first journalism job; it's a telltale sign of a bot in the bytes.
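For the tinkerers out there: here's a minimal Python sketch of that eyeball test. The transition-word list and the repeated-opener check are my own illustrative choices, not an official detector, so treat the output as a hint, not a verdict.

```python
import re
from collections import Counter

# Stilted transitions AI-generated text leans on (illustrative list, not exhaustive).
TRANSITIONS = {"moreover", "consequently", "furthermore", "additionally", "thus"}

def ai_tells(text: str) -> dict:
    """Count telltale transition words and repeated sentence openers."""
    words = re.findall(r"[a-z']+", text.lower())
    transition_hits = sum(1 for w in words if w in TRANSITIONS)

    # Repetition check: how often do sentences start with the same three words?
    sentences = re.split(r"[.!?]+\s+", text.lower())
    openers = Counter(" ".join(s.split()[:3]) for s in sentences if len(s.split()) >= 3)
    repeated_openers = sum(count - 1 for count in openers.values() if count > 1)

    return {"transitions": transition_hits, "repeated_openers": repeated_openers}

print(ai_tells("Moreover, the dress was iconic. Furthermore, the dress was historic."))
```

High counts on both don't prove a bot wrote it, but they're a good reason to read with your guard up.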
Chatbots don’t do analysis
AI can state facts, but it can't explain what those facts mean in real life. A human-written celebrity gossip piece would end with something like, "Kim Kardashian dieting for months to squeeze her butt into the 60-year-old Marilyn Monroe dress proves she'll do what she must to get attention on social media."
A human writer will draw a meaningful conclusion. If an article is just spouting statements like “Kim Kardashian wore a dress that Marilyn Monroe owned,” it might be AI.
Quotes and numbers don’t pan out
AI can write quotes and cite numbers like nobody’s business! As CNN pointed out, when chatbots are asked to write an article with quotes, they (hilariously) make up names like John Doe and Jane Smith. Not so hard to spot.
AI is also really bad at citing real-world numbers. If an article gives a percentage, ratio or amount, copy and paste that figure into Google. If a chatbot wrote it, there's a good chance you won't find any other evidence.
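If you'd rather not hunt for those numbers by hand, here's a minimal sketch that pulls out the percentages, dollar amounts and placeholder names worth googling. The regexes and the name list are my illustrative assumptions, not a proven fact-checker.

```python
import re

# Placeholder names chatbots fall back on (illustrative list).
PLACEHOLDER_NAMES = re.compile(r"\b(John Doe|Jane Smith|Jane Doe|John Smith)\b")

# Percentages, dollar amounts and big numbers worth verifying.
FIGURES = re.compile(r"\$?\d[\d,]*(?:\.\d+)?%?")

def claims_to_check(article: str) -> dict:
    """Return the names and figures in an article that deserve a quick search."""
    return {
        "placeholder_names": PLACEHOLDER_NAMES.findall(article),
        "figures": [f for f in FIGURES.findall(article)
                    if "%" in f or "$" in f or len(f) > 3],
    }

sample = "According to Jane Smith, phishing is up 1,265% and losses hit $4.2 billion."
print(claims_to_check(sample))
```

Paste whatever it flags into a search engine. If nobody else on the internet has ever printed that number or quoted that person, you've got your answer.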
There’s no personality
Chatbots really struggle with humor. The result is often bland writing without an interesting perspective or take. If you find yourself thinking, “Wait, this website used to have a lot more humor or wit,” AI writers may be taking over.
If you think, “I wish Kim would stop making those bad jokes,” congrats, you’re getting an email written by me, real-life Kim Komando.
Keeping an eye out
Since ChatGPT launched last November, phishing emails are up 1,265%. That's not a typo! AI chatbots are popping up in new corners of the internet every day. And that's not all bad. One handy use: AI assistants can scour long articles, do our research and sum up the main points.
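Want to try that yourself? Here's a minimal sketch, assuming the official OpenAI Python SDK (pip install openai) and an API key in your OPENAI_API_KEY environment variable. The model name is just an example; swap in whatever you use.

```python
# Minimal article-summarizer sketch. Assumes: `pip install openai` and an
# OPENAI_API_KEY environment variable. The model name below is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(article_text: str) -> str:
    """Ask a chatbot to boil a long article down to its main points."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: use whatever model you have access to
        messages=[
            {"role": "system", "content": "Summarize this article in 3 bullet points."},
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content

print(summarize(open("long_article.txt").read()))
```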
But remember, AI has been found to hallucinate (that’s the real term for it) statistics, legal cases, names and science. It just makes crap up, well, kinda like humans do.