As trust in the mainstream media keeps collapsing, there is no question that many are rejoicing at the prospect of AI replacing the human journalists they’ve grown suspicious of. As a journalist, I’ll no doubt be immediately accused of protecting my own job if I so much as dare criticize AI’s looming takeover of my field. And yet, I have some cause for concern and cynicism.
There’s a fundamental difference between how I write and how an AI does. I process the stories and information I choose to share through my distinct perspective, emotion, life experiences, (hopefully) wisdom, personality, and creativity. An AI is unable to do so beyond mimicry.
That doesn’t mean AI isn’t useful, especially for more technical and dry articles — so long as it doesn’t end up hallucinating facts, a problem that may or may not be solvable. But with a diligent human editor who takes the time to fact-check and verify accuracy, this can be avoided. It’s also a great brainstorming tool, one I’ve been using with increasing frequency.
That said, a number of publications have fallen into unfortunate traps by relying too heavily on AI. CNET, which used AI writing tools on dozens of articles, had to issue corrections on several of them, and even admitted that some of the errors were substantial. The outlet has since halted its use of AI tools to generate stories.
Similarly, Microsoft had to issue a retraction on an AI-generated article that listed the Ottawa Food Bank as a top tourist attraction in the Canadian capital. “Consider going into it on an empty stomach,” the story recommended.
This incident could easily have been avoided with a more careful editing process. But beyond that, it’s hard to fathom a human writer—even a poor one—making the same error in judgment.
Less ethical companies that use AI to generate content to boost their SEO rankings are often unconcerned about publishing inaccurate or misleading material. This makes it harder than ever to know what to trust.
News outlets are also growing increasingly concerned about how AI systems like ChatGPT and Bard are using their content. They feel their IP is being used as training material for these services without earning them any royalties, or even a credit.
Have you tried asking ChatGPT to tell you how it arrived at its answer? It doesn’t like to show its work.
Sometimes the content these AI tools spit out is eerily similar to existing articles; at times it even matches them word for word.
Some claim that AI can help take bias out of journalism, since machines aren’t biased like humans. But is this true? An AI is trained on specific datasets, such as countless articles and other published work, which means it is inherently subject to the biases of its training materials. Can this be minimized over time by expanding the dataset and programming the system to reject biases? Perhaps.
Earlier this year, the European Parliament passed the draft AI Act, which should, supposedly, require the makers of LLMs to more seriously scrutinize their training datasets for bias, though there’s some potentially concerning phrasing around outputs not perpetuating “existing societal inequities.”
Where AI can no doubt help reporters is in generating stories quickly when news breaks. It can at least provide a template for the journalist to edit and reframe, allowing critical stories to get out faster.
AI algorithms can also be used to analyze readers’ preferences and customize stories for each individual. This can help increase engagement. My fear, though, is that it would also keep readers in something of an echo chamber, since they would only be exposed to more of what they already like.
It can, however, be a fantastic analytical tool for figuring out what’s trending in the world.
The reality is that good journalism costs money to produce—because it requires time. The issue is that few people are prepared to pay for it these days—something that has always been a struggle, but especially so once media moved online and much content began to be given away for free. Even the story you’re reading right now, which doesn’t rely on exhaustive reporting and interviews, took a lot of time. By producing it and sharing it for free, I’m essentially subsidizing it, as are my few paid subscribers (thank you!). This isn’t sustainable at scale, however, so the temptation for outlets to minimize their human costs is significant.
Right now, in my view, AI is nowhere near good enough to replace journalists entirely. But it’s getting better, and to ignore its impact and potential would be a mistake. I suspect that over time it will indeed lead to fewer journalists being employed. Particularly talented writers will be more highly valued, but for basic, straightforward content and breaking news, newsrooms and outlets will rely on AI to do the bulk of the work, hiring humans to oversee the AI tools and edit their output.
In 2015, Narrative Science chief scientist Kristian Hammond predicted that by 2030, 90% of news would be written by machines. “The journalists will not be generating stories from data. That unambiguous, not-open-to-interpretation stuff will be done by machines,” he told the BBC. “It means that the journalists can extend their reach. The world of news will expand.”
Given that this prediction was made eight years ago, I wonder whether Hammond still puts his estimate at such a high number.
My own prediction is that the human writers who will last in journalism will need to focus more on doing things that machines can’t: interviewing people, reporting live from the field, and using their built-in creativity, imagination, and emotional understanding of the world around them.
Ironically, it is our human “bias”—in a sense—that might just help us, human journalists, survive.
☕️ Please consider supporting my work by making a donation and buying me a coffee. Here’s how.
Pre-Order my book, No Apologies: How to Find and Free Your Voice in the Age of Outrage―Lessons for the Silenced Majority.
NOTE TO READERS:
Thank you for keeping me company. Although I try to make many posts public and available for free access, to ensure sustainability and future growth—if you can—please consider becoming a paid subscriber. In addition to supporting my work, it will also give you access to an archive of member-only posts. And if you’re already a paid subscriber, THANK YOU!