Election Commission must confront AI threats head-on

It is reassuring to see the chief election commissioner (CEC) address, ahead of the upcoming election, the challenge posed by artificial intelligence (AI), which he described as "more dangerous than [conventional] weapons" due to its capacity to spread misinformation and disinformation. "It is now possible to circulate content using my exact image and voice," said AMM Nasir Uddin, calling the misuse of AI a "modern threat" capable of disrupting election campaigns and influencing the polls. He also mentioned other election-related challenges—such as the threat of illegal arms, restoring trust in the voting process, and ensuring voter turnout—but these are discussions for another day.
Knowing the problem doesn't guarantee that meaningful steps will follow, however. It is vital that the Election Commission implements effective safeguards, including establishing robust monitoring mechanisms to detect AI-generated content, collaborating with experts and media houses to minimise its impact, updating legal frameworks, and raising public awareness. Currently, the electoral code of conduct lacks clear directives on this issue, which must be addressed. While conventional measures meant to ensure a level playing field are important, it is far more urgent now to curb the misuse of AI. This necessity is underscored by both global and local experiences: a report by The New York Times in June revealed that AI was used in more than 80 percent of elections in 2024, and that it had already played a role in at least nine major elections this year.
Locally, one may recall the circulation of fake content, including cloned voices of candidates, during recent elections. However, with AI now making such fabrications easier and more convincing, the threat has multiplied. For example, a recent investigation by Dismislab catalogued 70 AI-generated political campaign videos, including reels, between June 18 and 28. These videos, created using Google's Veo text-to-video AI model, portrayed entirely fictional individuals (e.g. rickshaw drivers, garment workers, teachers, Hindu and Muslim women, young people, etc) offering endorsements for different political parties. The initial waves of AI-generated messaging seemed to benefit Jamaat-e-Islami, but campaigners for rival parties like BNP and NCP are not far behind. The widespread circulation of such emotionally charged, synthetic content raises serious concerns about its disruptive effect as we near the election.
Clearly, we need better safeguards against this trend. While it is impossible to completely eliminate the threat of AI-generated content—nor is all such content produced with malicious intent—the EC must do all it can to limit its misuse with the help of relevant state agencies, political parties, and social media platforms. It is crucial to learn from the experiences of other countries where AI has already disrupted elections. Without swift, informed interventions, Bangladesh too risks seeing its much-awaited election marred by such technologies.