It’s happened, folks. I have been ChatGPTsplained.
The unpleasant exchange took place on a social media site whose name I shan’t mention. It went something like this (paraphrased):
Me: Working on a white paper this morning, which prompted me to shake my cane and yell at capitalism [link]
Them: Get ChatGPT to give you a rough draft, then you can embellish and edit.
Me: I’ve played with ChatGPT enough to know it can’t be trusted with the kind of writing I do.
Them: Yes, but it does give very good structure that you can then edit to your own needs!
…No. It doesn’t.
ChatGPT produces generalized Internet filler content middle-manager waffle. I’m well-versed in generalized Internet filler content middle-manager waffle. I began my writing business in 2009 selling generalized Internet filler content middle-manager waffle.
I have since moved on from generalized Internet filler content middle-manager waffle, but I still know it when I see it. And it’s what ChatGPT produces – only not as well as the humans who currently do it for a fraction of a cent per word.
ChatGPT: Nobody, Writing Nothing, for an Audience of No One
As an example, I chose a topic on which I produced generalized Internet filler content for so many years, and in such vast quantities, that the chances of my actual writing being included in ChatGPT’s training data are quite high:
What are the elements of negligence?
Negligence is a legal concept that is used to determine whether a person’s actions or inactions can be considered a breach of duty of care owed to another person, resulting in damages or harm to that person. The elements of negligence are:
- Duty: The defendant (the person accused of negligence) must have owed a duty of care to the plaintiff (the person who suffered harm) to act with reasonable care under the circumstances.
- Breach of duty: The defendant must have breached their duty of care by failing to act with reasonable care, such as by failing to take necessary precautions or acting recklessly.
- Causation: The defendant’s breach of duty must have caused or contributed to the plaintiff’s harm or injury.
- Proximate cause: The harm suffered by the plaintiff must have been a reasonably foreseeable consequence of the defendant’s actions or inactions.
- Damages: The plaintiff must have suffered actual harm or damages as a result of the defendant’s breach of duty and the harm must be quantifiable and compensable.
Even this requires editing before submission. It’s not wrong, but it’s not quite right, either. “Reckless” behavior is typically treated separately from “negligent” behavior, for instance. “Proximate cause” is a type of causation, not a separate element (“causation” includes both cause in fact, or “but-for” cause, and proximate cause or “foreseeability”).
This example also has no clear audience – no group of people that would find it suited to their purposes. An ordinary person who has been injured is likely to stumble a bit over the wording, especially the repetition of words that don’t describe the same legal concept (“damages” and “damage,” for instance). Law students are going to be misled, figuring out very quickly that ChatGPT’s formulation was a waste of their time. Lawyers wouldn’t ask this question in the first place.
But maybe ChatGPT just needs some guidance as to audience? I asked.
Give me a definition of negligence appropriate for a person who has recently been injured.
Negligence is a legal term used to describe a situation where someone failed to take reasonable care, resulting in harm or injury to another person. In other words, if someone was not careful enough and their actions or inactions caused you harm or injury, you may be able to pursue a legal claim for negligence. Negligence can take many forms, such as inattentiveness, carelessness, or recklessness, and it can result in a wide range of injuries or damages. If you have recently been injured and believe that someone else’s negligence was the cause, it is important to consult with a qualified attorney to understand your legal options and potential remedies.
This is even more waffle-y than the first. It fails to define negligence in any but the most basic terms. In fact, I’d bet anyone asking what “negligence” is already knows it involves one person failing to take due care and someone else getting hurt as a result. This reads like every final filler paragraph on every law firm website in America – which is probably what it is.
But Can ChatGPT Write The Thing I Got Splained For?
The piece I got ChatGPTsplained about wasn’t law firm filler content about negligence. It was 2400 words on dynamic QR codes, inflation, and printing costs.
So I asked ChatGPT to “give me 2400 words on dynamic QR codes, inflation, and printing costs.” It gave me 685, which I’m assuming has to do with the limits of the tool.
The first paragraph was utterly worthless:
Dynamic QR codes, inflation, and printing costs are three seemingly disparate topics that have a surprising amount in common. In this article, we will explore the ways in which these three topics intersect and how they can impact businesses and consumers alike.
This is filler. 100% fluff. This is every first-year college writing student desperately hoping to make their word count. This is a waste of ChatGPT’s time to generate and my time to read. Yet here we are.
[The rest of ChatGPT’s effort, unedited, is here.]
The rest wasn’t much better. Among ChatGPT’s most common writing problems:
- Consistently opening a paragraph with one statement or topic, then completing it by discussing a different topic or idea. For example, ChatGPT begins a paragraph with “Another potential challenge with the use of dynamic QR codes is the impact of inflation on their effectiveness,” then dedicates the rest of the paragraph (three additional sentences) to a general definition of inflation.
- Connecting ideas that aren’t actually connected. ChatGPT made several references to “using dynamic QR codes to manage inflation.” Each of these treated “dynamic QR codes” as if they are a government policy approach rather than what they actually are (a barcode).
- Failing to distinguish between a specific concept and its general category. We saw this above when ChatGPT failed to grasp that “proximate cause” is a subheading of “causation” generally. ChatGPT also repeatedly used the phrase “dynamic QR codes” when discussing things any QR code can do, like include a website URL. If you’re trying to sell the concept of dynamic QR code use specifically (as the piece I was drafting did), using “dynamic” when you mean “all QR codes generally” undermines that effort.
- Suggesting things that aren’t real or just won’t work. According to ChatGPT, one problem with using dynamic QR codes “to manage inflation” is that hackers might create fake ones to “generate misleading data.” While hackers can, I assume, actually do that, I struggle to imagine a real-world reason why any hacker would bother. Faking a government-issued QR code to steal your personal data? Sure. Faking one to poison a government data set on inflation? …why?
- Offering “structure” that isn’t. When I started this experiment, I did not expect ChatGPT to provide a result that so neatly kneecaps the ‘splanation that ChatGPT would provide “very good structure.” But it did, in the final paragraph: “In conclusion, dynamic QR codes, inflation, and printing costs are three topics that are interconnected in complex ways.”
Like the opening paragraph, this sentence is first-year writing student waffle, but that’s not even its worst sin. Its worst sin is that the rest of the piece did not discuss this. The rest of the piece doesn’t deal with the interconnectedness of these topics, much less the “complexity” thereof.
The purpose of a conclusion is to summarize what was said so the reader has a clear, concise takeaway – a one-sentence built-in personal Cliffs Notes, if you will. ChatGPT’s attempt at a conclusion is akin to me ending this post with “In conclusion, many lawyers are left-handed.” It’s true, but it never came up in what you just read, and it’s going to confuse you.
In short: ChatGPT saved me no time at all. It wasted my time and annoyed me. Because it’s not good at writing.
No Worries Then, Right? …Right?
ChatGPT can’t do what I do. Not even close. But that doesn’t mean it can’t replace any writers at all.
I do think ChatGPT will come for generalized Internet filler content, and soon. My request to “write a landing page for an auto parts store” gave me copy equal to or better than what I could already order for a penny per word from some agencies:
Welcome to our extensive selection of high-quality auto parts at competitive prices. At our store, we have everything you need to keep your vehicle running smoothly and safely. Our inventory includes a wide variety of parts for all types of vehicles, from brakes and engine components to suspension and electrical systems.
Whether you’re a professional mechanic or a DIY enthusiast, you’ll find everything you need at our store. Our knowledgeable staff is always available to help you find the right parts for your vehicle, answer any questions you may have, and provide expert advice on installation and maintenance.
Find and replace “our store” with the name of your store, drag and drop into your website editor, click Publish. Done. No additional underpaid humans required.
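That workflow really is just string substitution. A minimal sketch, assuming a Python step in a publishing pipeline (the store name and template text below are made-up placeholders, not copy from any real site):

```python
# Sketch of the "find and replace, then publish" step.
# TEMPLATE stands in for ChatGPT-style boilerplate; "Ed's Auto Parts"
# is a hypothetical store name used purely for illustration.

TEMPLATE = (
    "Welcome to our extensive selection of high-quality auto parts "
    "at competitive prices. At our store, we have everything you need "
    "to keep your vehicle running smoothly and safely."
)

def personalize(template: str, store_name: str) -> str:
    """Swap the generic 'our store' for an actual business name."""
    return template.replace("our store", store_name)

print(personalize(TEMPLATE, "Ed's Auto Parts"))
```

One line of code, and the "writing" is done — which is rather the point about how little value this kind of copy carries.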
But ChatGPT couldn’t give me a single usable sentence in 685 words for a project of the type I turn out every single week. When I asked it to discuss a topic for which I recently submitted an academic book chapter proposal, ChatGPT spent 403 words being merrily 100% dead-on wrong.
TL;DR: ChatGPT only sounds like it can write to people who can’t write. My advice, to paraphrase Mark Twain: “’Tis better to write nothing and let people think you are a fool than to copy-paste from ChatGPT and remove all doubt.”
You are, in fact, paying a human if you drop a tip in the tip jar here. Thank you.