Nearly everyone who writes or edits for a living knows instantly what’s wrong with ChatGPT, the free AI tool that generates text in response to natural-language prompts. No, it’s not that it’s going to take our jobs. The problem with having ChatGPT or any other AI write articles is that it will be wrong or do a poor job, and it will lead to lawsuits.
Take the latest drama at CNET and Bankrate, two websites owned by Red Ventures that ran AI-generated content as informational articles without being transparent about it. Once readers noticed a small disclaimer and uncovered that bots had been doing the writing, internet backlash ensued. A few days later, according to The Verge, leadership at CNET told staff that the publication would pause its use of robots to write stories, implying it would resume once the hubbub had died down.
Somehow it got worse. Jon Christian, who has been covering the issue superbly at Futurism, noted that some of the CNET articles contained not only factual errors but also plagiarism.
The most painful part of the CNET debacle for me is that any writer could have seen it coming. The Verge’s reporting says that many staff were never told about the use of AI to write content. Perhaps some good will come out of this mess if it causes other publishers and businesses to take seriously the severe shortcomings of AI-written text, though given the response from CNET’s leadership, I worry they won’t.
Not Everything Should Be Automated
Behind the doors of any publication are people who write and people who try to make money. One group looks at tools like ChatGPT and sees potential value: How can we use this tool to be more efficient, turn a higher profit, automate something that’s routine? The other group knows just how hard it is to write content that’s original, accurate, and based on reliable sources. What seems automatable to one group is so obviously not to the other.
Most businesspeople know better than to say a machine can replace a writer, full stop. But some would and do say a machine can replace a writer for some kinds of writing. That mentality devalues writers and is shortsighted in understanding what writers do. It’s also disrespectful to readers. In the case of CNET and Bankrate, choosing to auto-generate articles about personal finance shows a lack of care, if not disrespect, to people who need help understanding their money.
It goes beyond writing articles. An example that pertains to the kind of pieces we publish at PCMag is product pricing. What if we automated pulling the nuts-and-bolts details about a product to give readers the information? Ask those of us who do this work manually, and we’ll tell you that even something as seemingly straightforward as a product’s price is never so simple. When a company says a service costs $9 per month, for example, it might bury the fact that you actually have to pay $108 in one lump sum; the quoted $9 is just the annual total divided by 12, even though you’re never billed monthly. That’s the kind of detail product reviewers go to great lengths to get right.
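The pricing trap described above boils down to simple arithmetic that marketing copy obscures. A minimal sketch, using the hypothetical numbers from the example (no real service is implied):

```python
# Hypothetical "per-month" pricing that is actually billed annually,
# as in the $9/month example above.
advertised_monthly_price = 9.00   # the number shown in the marketing copy
billing_period_months = 12        # the term you are actually charged for up front

# The real out-of-pocket cost at signup is the whole term at once.
upfront_cost = advertised_monthly_price * billing_period_months

print(f"Advertised: ${advertised_monthly_price:.2f}/mo")
print(f"Charged at signup: ${upfront_cost:.2f}")
```

The math is trivial; the reporting work is discovering that the second number, not the first, is what hits the reader’s credit card.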
Or consider A/B price testing. Companies that sell software-as-a-service are notorious for offering different prices to different customers, sometimes at random, to gather information about how much people are willing to pay. An experienced product reviewer or service journalism writer knows how to spot and fact-check these issues to write about them appropriately. They also care that the reader has the most accurate information. AI does not.
The more surface-level showstopper for AI is that it cannot replicate other very human elements that go into writing, such as reporting, doing hands-on tests, or having the breadth and depth of experience. AI can glean from other sources all it wants, but it can’t draw intelligent conclusions about trends or history that haven’t been drawn before. That’s a huge part of what we writers do. Some may even argue that a writer’s heart and soul goes into their work, which AI lacks, although I personally wouldn’t go that far. I’ve written my fair share of dry, uninspired pieces, and they have their place as long as the content is factual (AI is failing quite publicly at that now) and gives the reader something they need or want.
Lawsuits on the Horizon
Lawsuits are another real concern. When you let ChatGPT cruise the internet openly for information, it doesn’t provide a list of the sources it used. As Futurism found, AI bots know to reword or rephrase a chunk of content instead of repeating it word for word, but they do so about as well as a seventh grader. Publishers who let ripped-off paragraphs go out into the world without attributing the source are opening themselves up to legal action.
I imagine that educators, especially those familiar with TurnItIn, can easily spot these awkwardly reworded texts most of the time. TurnItIn, which was founded in 1998 (PCMag reviewed it almost 10 years ago), is a service that compares a student’s supposedly original writing with content published online and all other papers submitted to TurnItIn. That way it can identify plagiarism from published works as well as from other students’ writing, no matter where in the world they are. TurnItIn detects both word-for-word plagiarism and text that has been altered slightly but is clearly not original. It can do more, too, like advise students when they rely too heavily on quotes in their papers.
Educators pick up on the style of text that’s been lightly reworded but stolen from somewhere else because their ears are attuned to it. I feel the same way about the majority of text I’ve read from ChatGPT. Even when you ask the bot to write in the style of a particular person or outlet, it sounds stilted—robotic, even. As Bloomberg points out, OpenAI, which makes ChatGPT, says its systems “do not have the ability to produce human-like speech,” despite the article gushing earlier on that it “mimics human prose.” And sure, “speech” and “prose” are not the same, but the point is the syntax and style aren’t human, and it shows.
A student who submits a plagiarized paper may fail the assignment or have to face an ethics board. Media outlets that publish cribbed text will get slapped with lawsuits. Perhaps more importantly, it’s unprofessional, it undermines staff writers, and it demolishes the reputation of the publication.
A Blatantly Bad Idea
None of this is to say that AI can’t benefit writing somewhere, somehow. TurnItIn certainly has its issues, but it’s good at helping educators spot plagiarism and at teaching students who plagiarize unwittingly to do better. Grammarly is another decent example—it doesn’t make a skilled writer better, but it’s extremely useful for catching simple errors and for helping certain groups, such as writers who aren’t fluent in a language. AI writing bots are tools. They can be useful once we figure out what they’re good for. Writing informational, public-facing content isn’t it.
To outsiders, I can see how writers warning about the dangers of ChatGPT and other AI writing assistants may come off as job insecurity or alarmism. But writers are so opposed to it not because we’re afraid, but because it’s so obvious to us why using AI to write content for publication is a bad idea.