2027 Polls and AI-mediated Threats: How Prepared is INEC?


On 16 March 2026, INEC held a strategic Information and Communication Technology (ICT) workshop in Lagos. The workshop focused on developing an Artificial Intelligence (AI) Regulatory Framework to guide the Commission’s adoption of emerging technologies in electoral administration. 


While INEC's effort to integrate AI into Nigeria's electoral processes is commendable, it is important that the Commission keeps an eye on the weaponization of AI. The threat from that sphere is perhaps bigger than we imagine.


The experience of recent years has shown that the acceptance of election results by our political parties is always uneven and strategic. When victorious, parties affirm the credibility of the process and the authority of INEC. When defeated, they frame the same institution as compromised and go ahead to contest the results in the court of law and the court of public opinion.


As INEC makes its election plans, interested parties also go to work. For politicians, election warfare transcends canvassing voters and protecting ballots. Their preparations incorporate processes of undermining the integrity of elections. At the very core is the strategy: discredit the process if the outcome does not match expectations! And that takes us to the question of how AI is crucial to the outcomes of the 2027 elections.


Spreading rumours and doctoring images have always been part of our elections. However, what once required coordinated human effort can now be executed with scale, speed, and sophistication through AI-driven tools. I need not emphasise our ingenuity in exploiting technology. Only a few countries can match us. So, what can we expect of AI deployment in the 2027 elections?


Unless regulations stand in their way, parties will deploy AI to manipulate opinion polls and data to discourage supporters of opposing parties from coming out on election day. I see them using AI-powered bots to create the illusion of grassroots support even where their supporters are scanty. They will, as well, invent outrage against opposing parties.


On election day, I foresee their "data boys and girls" using AI to generate highly realistic fake videos or audio. Candidates and electoral officials will appear to say or do things they never did. AI will be utilised to circulate fake "evidence" of ballot tampering, rigging, and violence. Fake videos of vote-buying will be created and spread via social media. Fake press releases purportedly from INEC, informing the public of "glitches" on its IReV portal, will be circulated.


In all, agents provocateurs will employ AI-generated content to claim fraud and victories where none exist. These are no conjectures; these are what I know we, Nigerians, are capable of doing!


Am I saying that instances of vote-buying will not be recorded, that party thugs will not disrupt polls in some locations, or that INEC will not proclaim glitches? Not at all! Compromised election officials may yet tamper with Form EC8A, voters may yet be disenfranchised, and evidence of these infractions captured on camera. Nothing should prevent such information from circulating on social media.


Nonetheless, as much as we may want to employ social media to police our elections, I think we should be worried about the prospect of falsehoods being deliberately manufactured and disseminated to discredit the electoral process.


How prepared is INEC to respond in real time to viral falsehoods during campaigns and on election day? A Commission working hard to integrate AI into electoral administration is likely to recognize the threats that AI usage could pose to election credibility. If no plan is yet in place to address this emerging frontier, then INEC might have to gather its ICT staff in Lagos one more time for another workshop.


If the Commission does not currently have data science experts and AI consultants on its payroll, now is the time to hire some. They can help detect deepfakes, synthetic media, and coordinated disinformation campaigns. INEC's collaboration with companies like Google and Meta is also critical for content moderation and for flagging destructive AI-generated material.


The risk of eroding the shared reality upon which democratic participation depends calls for anticipatory vigilance. INEC must shine its eyes! At any rate, reactive explanations are at best useless. If falsehoods circulate freely on election day, then INEC's late rebuttal will only reflect the popular Naija slang: "You go explain tire, no evidence."


Unfortunately, many of us, invoking freedom of speech or access to information, believe there should be no consequences for manufacturing and spreading falsehoods. No! There must be sanctions. If there is no legal framework for penalizing deliberate AI-driven disinformation, it is imperative that such laws are enacted before the election cycle intensifies, in order to safeguard electoral integrity.


In a country like ours, severely marked by extreme scepticism, a convincingly altered video of an electoral official, or a fabricated announcement of results that circulates minutes after polls close, is enough to ignite confusion and delegitimise the electoral process in the eyes of citizens and the international community.


Ultimately, improving the credibility of election results takes more than policing social media for AI disinformation. Institutional credibility also demands strengthening, along with consistency in enforcing rules. However, like rigging and every form of democratic subversion, AI disinformation has the capacity to provoke nationwide unrest, and on a magnitude capable of triggering democratic backsliding.


Jide Ololajulo, PhD, writes from Abuja, FCT, and can be reached at babjid74@yahoo.com
