What do Prime Minister Narendra Modi, West Bengal Chief Minister Mamata Banerjee, Opposition leader Rahul Gandhi, singer Taylor Swift and actor Anil Kapoor have in common? They have all been victims of “deep fake” videos, generated using artificial intelligence (AI).
The Merriam-Webster dictionary defines a “deep fake” as an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said. And real life has already begun to mimic art when it comes to AI. Recently, the actor Scarlett Johansson alleged that her voice from the 2013 film Her was used without her consent by OpenAI for the voice known as ‘Sky’ in its chatbot. In Her, the protagonist falls in love with his phone’s AI, voiced by Johansson. In 2024, fiction has become reality, with some changes to the plot line. When it comes to artistes and deep fakes, the causes and consequences in law revolve around the ownership of proprietary material: one’s reputation, fame, voice and person being used without permission or for malicious reasons.
However, given that this is election season in India and Delhi goes to the polls today, I would like to focus on the consequences, or lack thereof, of “deep fakes” for elections. The security and integrity of the electoral process have traditionally been premised on the integrity of the ballot box, the independence of the Election Commission of India (ECI) and the accurate counting of every vote cast. Since 1951-52, when India held its first general election, this has been the focus of efforts to keep the process pristine. Now there is an additional challenge: the use of AI to influence the outcome. One facet of the use of AI is this phenomenon of “deep fakes”.
On May 6, the ECI issued an advisory to political parties on the “responsible and ethical use of social media in election campaigning”. It asked political parties to remove fake content within three hours of it coming to their notice.
The legal provisions available to address such deployment of deep fakes include the Information Technology Act, 2000, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 and the Indian Penal Code, 1860.
Let us start with the oldest of these legal instruments, the Indian Penal Code, which provides three traditional remedies. One is Section 468, which deals with the forgery of a document or electronic record for the purpose of cheating. Another is Section 505, pertaining to the making, publishing and circulation of any statement, rumour or report with the intent to cause fear or alarm to the public. Both provisions were used to deal with alleged deep fakes purporting to depict the Chief Minister of Uttar Pradesh, Yogi Adityanath. Further, Section 416 of the Code criminalises cheating by personation, such as when an individual pretends to be some other person, knowingly substitutes one person for another, or represents that he or any other person is a person other than who he really is.
The Information Technology Act, 2000 has the potential to provide some redressal against deep fakes. Section 66(c) provides that the sending of any electronic mail or message for the purpose of causing annoyance, or of deceiving or misleading the recipient, will be punished with a term of up to three years in prison. Further, the Act, via Sections 66 and 67, also punishes cheating by personation, the violation of privacy and the transmission or publication of images of a “private area” with imprisonment of up to three years. These legal provisions, while useful, do not provide comprehensive protection against the use of AI to generate misinformation, including deep fakes.
The existing legal regime also provides no remedy for attempts by hostile countries to influence electoral outcomes. In 2024, over half the planet is going to the polls, including major democracies like India, the US and the UK. The Independent reports that British Home Secretary James Cleverly warned in February that adversaries like Iran or Russia could generate content to sway voters in the elections scheduled to be held later this year in Britain. He said that “increasingly, the battle of ideas and policies takes place in the ever changing and expanding digital sphere…The landscape it is inserted into needs its rules, transparency and safeguards for its users.”
In April, just before the commencement of the Indian general elections, the Microsoft Threat Analysis Centre (MTAC) warned that China will “at a minimum, create, and amplify AI-generated content to benefit its interests” in elections in India, South Korea and the US. Last week, Forbes reported that Russia is looking to turn US opinion against Ukraine and NATO. The report relies on MTAC analysis that found “at least 70 Russian actors using both traditional media and social media to spread Ukraine-related disinformation over the last two months” as a prelude to the upcoming presidential elections in the US. This AI-related campaign includes the use of deep fake videos.
The battle for the integrity of electoral systems and the formulation of informed public opinion has now been taken into the “virtual” world. This will necessarily entail a new legal understanding of what amounts to impersonation and misinformation. Europe’s Artificial Intelligence Act, 2024, which will come into force in June (discussed earlier in ‘A penal code for AI’, IE, March 16), offers some ideas on how to think about a new legal regime to address offences that include the generation of deep fakes whose goal is to “manipulate human behaviour”. Law reformers in India need to use the existing legal regime as a foundation to thoughtfully craft new laws that will address AI and deep fakes that look to influence electoral outcomes.
The writer is a Senior Advocate at the Supreme Court