Can AI Be Better Journalists Than Humans in the Future?

AI’s role in journalism has split the world in two: one side speculates that the technology could replace humans, while the other strongly believes it is destined to remain a helpful tool.

By Michael Akuchie

The origins of journalism can be traced to Rome circa 59 B.C., where news was documented and distributed in a news sheet called the Acta Diurna. Journalism has since grown into a multifaceted profession, with updates available in print formats like newspapers, broadcast formats like radio and television, and, more recently, digital media.

The internet is the new frontier of journalism, allowing users to break news via text, photos, videos, or a combination of the three. Of course, there are ethical concerns with the internet, and social media especially, as a medium for circulating news, given the growing rate of fake or unverified reports being passed off as authentic.

Although the internet still enjoys the spotlight, Artificial Intelligence (AI) is poised to become the next major innovation in journalism. No longer restricted to sci-fi or dystopian films such as The Matrix (1999) and Ex Machina (2015), AI is gradually becoming a collaborative tool that augments human effort in everyday tasks such as writing.

There is an ongoing, polarising conversation about the role of AI in journalism. Some forecast that the technology could replace reporters outright if it is trained properly; others see AI as merely a handy tool that makes tasks like news gathering and writing easier, and do not believe it could replace humans entirely. Let’s consider the current role of AI and the impact it has had so far; perhaps then we can speculate on whether it could be a better journalist than humans.

AI has enjoyed massive popularity over the past few years, much like crypto and the Metaverse. In journalism, it aids reporters in various aspects of the job, such as audio-to-text transcription, image generation, and content generation.

With content generation, AI comes in handy in several ways. A writer can research a topic, gather insights from human experts, and then feed both materials to an AI content generator. For instance, I can ask ChatGPT, one of the most popular chatbots today, to provide an outline for an article on addiction recovery, incorporate quotes from experts in psychology, and then write it. Journalists can also prompt an AI tool to generate a news article on a subject. However, this practice raises the question of whether the information can be trusted completely; one cannot overlook the ethical concerns regarding the accuracy of AI-generated content. As such, it still requires a set of human eyes to ensure it can be published error-free.

In November 2022, the popular tech news outlet CNET began publishing AI-generated news. The articles turned out to be error-ridden, however, and had to be taken down once the errors were discovered. Critics like Nathan Grayson, a Washington Post reporter, faulted CNET’s decision, saying, “This is just the beginning and aggregation plus explanation performed by AI will doubtless result in lower-quality work and fewer jobs.” The controversy remains one of the clearest examples of how trusting AI to do human work without supervision can cause problems.


Another instance of journalists’ over-reliance on AI is closer to home. In January 2024, an essay published in the Guardian Nigeria’s Lifestyle column highlighted one of the biggest Nigerian movies released in 2023. In the essay, the actual director of The Black Book (2023), Editi Effiong, was wrongly replaced with Kunle Afolayan, the director of numerous movies such as Swallow (2021) and October 1 (2014). In the same essay, the synopsis for The Woman King (2022) was erroneously used for Jagun Jagun (2023).


Although the article was taken down following outrage on social media, the fact that a version with such glaring errors was published at all raised important questions. For instance, the language used in the article closely resembled that of the popular AI chatbot ChatGPT. If an AI tool was truly used to write the article, should the writer not have disclosed that fact somewhere within it? And why was the draft not thoroughly edited for errors? Doing so would have been a fine display of accountability and integrity. The financial news company Benzinga, for example, uses an in-house AI-driven content generation system to write news stories. It acknowledges this fact and goes further to state that each article generated by the AI tool is edited by human staff to align with its editorial guidelines.

Despite AI’s inability to guarantee 100% accurate information, it still offers journalists some advantages. For instance, it is incredibly cost- and time-effective. A tool that can help generate newsworthy stories within minutes frees up the time and resources newsrooms would otherwise have spent writing those stories from scratch. Of course, all AI-generated articles must be subjected to a thorough editing process, as demonstrated by Benzinga. AI tools can also handle repetitive tasks without complaining about monotony.

On whether AI has the capability to do journalists’ work better and possibly replace them, it is safe to say that humans will keep their jobs for a while yet. Writing an article, particularly an opinion piece, demands a human perspective. Sure, an AI-generated news story may contain flowery language that reads well, but that doesn’t mean the technology can write articles the way a human would. Besides, AI is not entirely accurate, as the examples above show. The data used to train AI models determines whether the stories they write will be factual or not.

Perhaps what newsrooms can do is emphasise a healthy collaboration between journalists and AI tools. For instance, this article was written with the aid of Grammarly, an AI grammar checker. Before accepting or rejecting its suggestions, I examined them closely. If journalists can adopt a similar mindset and prioritise quality over everything else, then talk of AI taking journalists’ jobs will fizzle out.

Michael Akuchie is a tech journalist with four years of experience covering cybersecurity, AI, automotive trends, and startups. He reads human-angle stories in his spare time. He’s on X (fka Twitter) as @Michael_Akuchie and on Instagram as michael_akuchie.
