In what has been proclaimed a “super election year”, with 72 countries going to the polls worldwide, the potential impact of artificial intelligence (AI) on democracy is a major concern. The 2024 general elections in India, too, served as a testing ground for the use of AI technologies in poll campaigns. Indian political parties were projected to have funneled huge sums of money into AI-generated content ahead of polls. In this context, it is incumbent upon us to evaluate the impact of AI on elections, and its considerable potential both to strengthen and to constrain democracy.
The integration of AI into electoral processes is full of paradoxes. It offers significant advantages that can enhance the efficiency, security and transparency of democratic systems. From voter engagement to fraud detection, AI presents opportunities to modernise elections and address longstanding challenges. One of its most significant contributions is its ability to streamline administrative processes. Election management requires the coordination of vast logistical tasks, including voter registration, ballot processing, and real-time monitoring of the movement of personnel and materials and the functioning of polling stations. AI-powered systems can automate routine operations, such as verifying voter identities or updating registration databases, reducing human error and saving time.
Though AI can itself generate false information, it paradoxically also plays a crucial role in combating disinformation. Elections are increasingly vulnerable to the spread of false information, which can distort public opinion and undermine trust in democratic outcomes. Through natural language processing and machine learning algorithms, AI can identify and flag misleading or harmful content on social media platforms.
Fraud detection and prevention is another domain where AI demonstrates its value. Machine learning algorithms can analyse voting patterns and detect anomalies that may indicate electoral fraud, such as multiple votes from the same individual or suspiciously high voter turnout in specific regions. In countries where electoral fraud has historically been a concern, AI can act as a safeguard, providing additional layers of scrutiny beyond manual oversight.
Furthermore, AI can enhance voter engagement by personalising interactions and increasing accessibility. Chatbots, for example, can assist citizens by answering questions about registration deadlines, polling locations and voting procedures. By improving efficiency, combating misinformation, preventing fraud and promoting greater accessibility, AI has the potential to strengthen the democratic process and foster more informed and inclusive elections.
The downside is equally important. Artificial intelligence introduces several challenges and risks that could undermine the integrity, fairness and transparency of elections. Without careful regulation and oversight, AI’s involvement in elections may produce negative consequences, including privacy violations, distortions caused by algorithmic bias, and public mistrust.
A primary concern is the risk of algorithmic bias. AI systems rely on historical data for training, and if this data is biased — whether along racial, socioeconomic or political lines — AI may replicate and even exacerbate those biases. For instance, automated voter registration systems or fraud detection tools might disproportionately flag certain groups due to biased data sets. Such outcomes could disenfranchise vulnerable communities, raising serious questions about the fairness of the electoral process. Additionally, the use of AI in elections introduces privacy concerns.
AI systems typically require access to large volumes of personal data to function effectively, such as voter registration details, social media behaviour, and geographic information. Without robust data protection frameworks, there is a risk that this data could be misused or fall into the wrong hands. The unauthorised use of voter information could facilitate targeted political manipulation, voter profiling or even identity theft, eroding public trust in both electoral institutions and AI technologies.
Transparency is another critical challenge. AI models — especially complex machine learning algorithms — often operate as “black boxes,” meaning their decision-making processes are not easily understandable. If AI tools are used to determine voter eligibility or identify fraudulent activities, but their working remains opaque, it becomes difficult to ensure accountability. This lack of transparency can lead to disputes about the credibility of election outcomes, further complicating post-election processes.
AI-driven disinformation is also a concern. While AI can help detect false information, it can simultaneously be exploited to generate sophisticated fake news, deepfakes, or misleading political advertisements. These AI-generated artefacts can be used maliciously to manipulate public opinion and spread misinformation, undermining voters’ ability to make informed choices.
In sum, the use of AI in elections presents serious risks alongside its advantages. If these challenges are not carefully managed, the introduction of AI into elections could have disastrous consequences. At the global level, the United Nations is promoting a comprehensive approach to AI governance, focusing on global standards, ethical guidelines, national strategies, risk mitigation, regulatory frameworks, skill development and public awareness. In March 2024, the UN General Assembly adopted a resolution — led by the US and backed by over 120 nations — calling for “safe, secure, and trustworthy” AI systems.
Regionally, the Council of the European Union approved the Artificial Intelligence (AI) Act in May 2024 to harmonise AI regulations. The Act balances innovation with transparency, accountability and rights protection. Set to take effect in 2026, it complements the EU’s broader regulatory agenda, including the Code of Practice on Disinformation, which mandates political ad monitoring, and the Digital Services Act. Together, these initiatives reflect a growing global effort to regulate AI responsibly.
Efforts to regulate AI in India remain inadequate, with companies accused of profiting from the spread of hate speech. Research by The London Story and India Civil Watch International found that Meta permitted political ads promoting Islamophobia, Hindu supremacist rhetoric and calls for violence. YouTube failed to block ads containing misinformation and inflammatory content.
While the Information Technology Act, 2000, governs online platforms, the Election Commission of India (ECI) oversees communications during elections. However, although the ECI mobilises social media platforms for voter education through its flagship programme, SVEEP, it has struggled to regulate these platforms effectively, as their erratic adherence to a voluntary code of ethics limits enforcement.
Since India currently lacks AI-specific legislation, the rise of deepfakes has heightened demands for regulation. In July 2024, reports indicated that the Ministry of Electronics and IT was drafting a new AI-focused law. This legislation may require social media platforms to label AI-generated content, to enhance transparency and curb manipulation. There is no doubt that safety mechanisms are needed on a war footing.
The writer is former Chief Election Commissioner of India and author of India’s Experiment with Democracy — the Life of a Nation through its Elections