Enough About Em-Dashes … AI ‘Hallucinations’ Are a Much Bigger Issue


I was all set to write a post on the uproar over whether em-dashes are a telltale sign of artificial intelligence-generated content—I have strong feelings about this—but there’s a much more troubling AI indicator out there.

As you can probably tell, I have no problem with em-dashes in moderation. AI is trained on content written by human beings, and many writers happen to use em-dashes. I believe people seized on the idea that em-dashes are a sign of AI because they’re worried readers can’t distinguish between content generated by AI and content written by human beings. So they malign punctuation that does nothing more than separate information or a thought from the rest of a sentence.

I’m far more troubled by a documented marker of AI-created content: the phenomenon known as AI “hallucinations” that’s cropping up in chatbot-generated material. “Hallucinations” is a euphemism for “making stuff up.” Or, as OpenAI, the company behind the generative AI chatbot ChatGPT, describes it, “a tendency to invent facts in moments of uncertainty.”

AI tools aren’t infallible, and neither are human writers and editors. But AI doesn’t “think” like a human being. It merely predicts and generates text based on preexisting patterns in the data it’s given. If there are gaps in the data, AI tools fill them in with whatever data they do have—which is where hallucinations come in. If an AI tool can’t find the right information in the content it was trained on, it will invent a response to a query. For example, when the magazine Fast Company asked ChatGPT to draft a news story about Tesla’s quarterly earnings, the chatbot produced a grammatically correct article that cited completely random financial figures.
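To make that mechanism concrete, here is a deliberately tiny, hypothetical sketch in Python (nothing like the scale or sophistication of ChatGPT): a toy word predictor that learns patterns from a handful of words and, when asked about something it has never seen, simply falls back to its most common word rather than admitting it doesn’t know. That fallback is a crude stand-in for a hallucination.

from collections import Counter, defaultdict

# A toy training corpus standing in for "preexisting patterns in the data."
corpus = "tesla reported earnings today tesla shares rose today analysts cheered".split()

# Count which word tends to follow which (a simple bigram model).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

overall = Counter(corpus)  # fallback used when the model has no relevant data

def next_word(prev):
    # If the model has seen this word before, predict from real patterns.
    if prev in bigrams:
        return bigrams[prev].most_common(1)[0][0]
    # Otherwise it still answers confidently, with no grounding at all:
    # a crude analogue of a hallucination.
    return overall.most_common(1)[0][0]

print(next_word("tesla"))    # grounded in the training data
print(next_word("revenue"))  # never seen, yet the model still "answers"

Real large language models are vastly more capable than this toy, but the underlying dynamic is similar: they are built to produce an answer, not to verify one.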

AI hallucinations have resulted in real-world repercussions. In July, a federal judge in Denver ordered two attorneys representing MyPillow CEO Mike Lindell in a defamation suit to each pay $3,000 for submitting legal documents citing cases that don’t exist. As NPR pointed out, it’s fine for attorneys to use AI tools, but they need to ensure the claims they make in court are grounded in the (real) law. A $3,000 sanction for not catching AI errors in court filings might be just a slap on the wrist, but the damage to the law firm’s credibility could be considerable.

The prevalence of AI hallucinations is actually increasing, too. Efforts by OpenAI and other companies to refine how their AI products operate are resulting in more hallucinations. Even worse, AI tool developers don’t know why, according to The New York Times. The CEO of a start-up AI tool developer stated in the article that he believes generative AI will always make mistakes and that hallucinations will be part of the package.

So, who’s fact-checking AI-generated content? Media outlets, universities, libraries and content creators, including marketers, are issuing guidelines for verifying the accuracy of generative-AI content and research. Guidance includes: double-checking all facts using credible sources; ensuring information is up to date; looking for inconsistent statements; and consulting experts on specialized topics. This is what writers and editors routinely do as part of their work processes.

Generative AI has been touted as a valuable tool for writers for its ability to speed up research and the creation of first drafts. In their current forms, AI writing tools must be monitored closely to catch inaccurate or fabricated information. No matter who or what creates content, errors erode an audience’s trust in both the writer and content provider. Factual mistakes devalue the content’s message, regardless of whether the mistakes are silly or serious. Whether such errors get corrected before the content goes live remains up to the humans in charge.
