Why AI Misinformation is a Serious Threat to Elections and Democracy

Using AI tools to spread false information threatens the integrity of elections in countries that practise democratic governance.

By Michael Akuchie 

2023 was the year of artificial intelligence (AI), as companies like Spotify, Microsoft, Grammarly, and Google bolstered their offerings with AI tools. It was also the year that ChatGPT, a creation of OpenAI, stole the show with its ability to go beyond the capabilities of a search engine. Unlike the popular Google Search, ChatGPT processes prompts using deep learning and responds conversationally, much as a human would. While there are concerns about its accuracy, ChatGPT and other chatbots have become enabling tools that make users more efficient.

Since its breakout year, the world has come to terms with AI’s potential applications. From helping scientists create new and more effective drugs to composing music, there is so much that AI technology can do. And therein lies the problem. With technology, especially newer innovations, there is an ever-growing worry that it can be used for unscrupulous activities. Consider the internet, which, among other things, is meant to connect people living miles apart through instant messaging platforms and video conferencing solutions. Today, the internet is home to various crimes, including cyberbullying, advance fee fraud, and child pornography. It has also enabled political rivals to publish defamatory information about each other, especially during election season. While online troll blogs, X (formerly known as Twitter), and Facebook have long been exploited for misinformation, generative AI has recently proven to be a far more potent medium.

FILE PHOTO: AI Artificial Intelligence words are seen in this illustration taken May 4, 2023. REUTERS/Dado Ruvic/Illustration

TechTarget, a provider of online content and brand advertising, defines generative AI as a type of AI technology that can create diverse kinds of content, including audio, video, and images. One noteworthy feature of AI-generated content is that it passes for an authentic creation unless it is properly scrutinised with tools designed to spot fakes.

Last year, Nigeria held elections for most of its political offices. A few weeks after Bola Tinubu was announced as President, an audio recording surfaced that seemed to place Peter Obi, one of Tinubu’s opponents in the election, at the centre of a controversial conversation. In the recording, purportedly of a conversation with David Oyedepo, founder of the Living Faith Church Worldwide, Obi is heard asking Oyedepo to help disseminate his campaign promises to Christians in the South West and parts of the North Central. According to the recording, Obi described the election as a “religious war”. Obi’s campaign team denied the authenticity of the recording, dismissing it as a deepfake intended to spark violence and discredit their candidate. Given Nigeria’s history of ethno-religious conflict, which has claimed many lives and displaced many more, such comments, whether genuine or not, can reignite a fresh wave of trouble.

According to Heimdal Security, a software company, “deepfakes are fake video and audio footage of individuals that are meant to make them look like they have said and done things that, in fact, they haven’t.” While they are similar to photoshopped images, a deepfake is harder to detect because of the sophisticated technology used to create it. As such, the chances of large-scale misinformation are high.

Robot hand pressing a laptop. Source: Forbes

Last year, an AI-generated image of a large explosion near the Pentagon, the headquarters of the United States military, went viral. In the heat of the moment, many social media users reshared the image, spreading it even further. It was eventually discredited by Pentagon officials, who confirmed that no such explosion had occurred. While the explosion was not real, the shockwave it sent across social media certainly was: news of the fake blast briefly caused US stocks to dip, further justifying the claim that AI-powered misinformation can have real-life consequences.


Fake news illustration. Source: Reuters

In 2024, AI is expected to take centre stage once again, with many organisations set to invest heavily in the technology. While this is welcome, given AI’s potential benefits across many industries, the worry that it will be used in further misinformation efforts grows stronger.

It is election season in the US, Mali, South Africa, Tunisia, and a few other countries, and one can only wonder how politicians will deploy AI-driven propaganda to their benefit. Using AI tools to spread false information threatens the democracy of these countries: it could lead voters to form false opinions about well-meaning candidates, allowing those with deceitful intentions to be elected. Last year, a video of Florida Governor Ron DeSantis dropping out of the presidential race surfaced. Though it was proven to be AI-generated, it lends further weight to the argument that governments must fast-track AI regulation or risk dealing with the consequences. Unfortunately, AI is developing faster than regulation, and that is worrisome. If legislators and other concerned authorities do not step in with appropriate laws, more people will be subjected to seemingly authentic content that could cause widespread uproar and, perhaps, violence.

Michael Akuchie is a tech journalist with four years of experience covering cybersecurity, AI, automotive trends, and startups. He reads human-angle stories in his spare time. He’s on X (formerly Twitter) as @Michael_Akuchie and on Instagram as michael_akuchie.

Cover Photo: Forbes
