Deepfake Menace: The Exigency for a Special Legislation in India

“Without corroboration, there is absolutely no way to know whether somebody’s memory is a real memory or a product of suggestion.” - Elizabeth Loftus

Over the last couple of months, several national and international news organisations have reported extensively on the use of artificial intelligence (AI), and deepfakes in particular, during the last general election in India. This new age of technological disruption has been brought about by rapid advancements in AI, with deepfakes emerging as a particularly concerning threat to India’s public discourse and information integrity.

The fact that deepfakes are frequently indistinguishable from genuine content to the unaided eye presents a significant obstacle, prompting governments and private platforms to create deepfake detection tools and guidelines regarding their use.

Today, almost anything can be impersonated in deepfake content: a person’s face, a famous dialogue from a show, a background scene, a body, a voice. As a result, many public figures have been depicted saying or doing things they never would have in real life.

For example, several deepfake videos targeting celebrities such as Aamir Khan, Sachin Tendulkar, and Ranveer Singh recently made the rounds on the internet, showing them endorsing a specific political party, attempting to sway voters ahead of the general elections, or promoting gambling apps.

With the introduction and ease of use of deepfake technology, things will not always be as they seem. Even though deepfakes offer new opportunities for education and learning, they have the ability to erode and undermine our faith in ordinary things. Given the rapid pace of technological innovation, it becomes necessary to address this menace through legislation, taking into consideration the global position on the legal and ethical implications of deepfakes.

The creation of deepfakes dates back to 2017, when a software developer on Reddit started sharing his creations, which involved swapping the faces of adult film artists for those of Hollywood celebrities. In 2018, comedian and actor Jordan Peele shared a deepfake video in which he impersonated former US President Obama and warned of the dangers of deepfake media. By 2019, the use of deepfakes had gone viral, and the US House Intelligence Committee started holding hearings on the possible risks that deepfakes posed to national security.

Unfortunately, over the past five years, the use of deepfakes has skyrocketed due to their easy accessibility. Deepfakes are now commonplace, resulting in the creation and spread of false information and misunderstanding about significant state and non-state issues.

Legal Landscape Surrounding Deepfakes in India

While deepfake content is not expressly prohibited by current laws, if it causes harm or violates laws pertaining to hate speech, video voyeurism, or impersonation, it falls under the Information Technology Act, 2000, which provides punishment for cheating and personation by using computer resources, violation of privacy, and related offences.

In the absence of specific legislation, the Ministry of Electronics and Information Technology (MeitY) released an advisory in November 2023 urging social media intermediaries (SMIs) to identify and remove harmful deepfake content. If SMIs do not comply, they risk losing their safe-harbour immunity under the Act and being held liable for any harm caused by deepfake content.

Additionally, MeitY released an advisory that prohibited the distribution of illegal content and required the labelling of AI models that were still under trial. The current Indian intellectual property rights (IPR) framework mainly protects works of literature, drama, music, and art created by human authors; it is ineffective in protecting AI-generated works. The Copyright Act, 1957 protects works against unauthorised use and allows copyright owners to take legal action, but it presupposes human authorship, meaning that human intervention is necessary for protection under current copyright law.

During the recent 2024 Lok Sabha elections, several deepfake videos were circulated to push false narratives. Such acts could endanger social harmony and public order, in addition to interfering with the electoral process. The Indian Penal Code, 1860, now the Bharatiya Nyaya Sanhita, 2023, punishes the offence of promoting enmity between different groups on grounds of religion, race, place of birth, residence, or language, and of doing acts prejudicial to the maintenance of harmony, with imprisonment of up to three years, or with a fine, or with both.

It is noteworthy that, despite the absence of any specific legislation governing deepfakes, the Indian judiciary has paved the path for limiting their misuse. For instance, the Delhi High Court, in Anil Kapoor v. Simply Life India and Ors, granted protection to the actor’s individual persona and personal attributes against misuse, specifically through AI tools used to create deepfakes. The court restrained sixteen entities from using the actor’s name, likeness, or image, or employing technological tools such as AI, for profit or commercial purposes.

Similarly, the renowned actor Mr. Amitabh Bachchan was granted an ad interim injunction in Amitabh Bachchan v. Rajat Negi and Ors, which prevented the unauthorised use of his personality rights and personal characteristics, including his voice, name, image, and likeness, for commercial purposes.

Global Position on Deepfakes

The US, Canada, Australia, China, Germany, India, and the European Union are among the twenty-nine signatories that have placed their support behind the effort to stop the “catastrophic damage, whether intentional or accidental” that comes with the growing adoption of artificial intelligence.

The Bletchley Declaration outlines a course of action for international cooperation on the current and future hazards associated with artificial intelligence (AI). Its agenda includes identifying risks in the AI arena and adopting corresponding risk-based policies across the signatory nations, with the goal of enhancing transparency from the private companies creating cutting-edge AI capabilities.

The rapid advancement of technology presents a significant challenge for fact-checkers and content moderators, who struggle to keep up with the spread of false information. It is especially difficult for fact-checkers in India to refute deepfake material when it is disseminated quickly and widely, or during periods of high activity such as election seasons.

It is imperative that India address this exigency by creating a special legal framework that works with its information ecosystem to counter the various forms of manipulated media, from detection algorithms for deepfakes to media literacy against cheaper imitations. Such a legal framework would safeguard free speech and beneficial innovation (such as generative AI tools that can help overcome language barriers) while effectively combating the use of dangerous deepfakes.
