The battlefield is no longer limited to physical borders: digital warfare has become the new frontier, with videos, images, and messages manipulated to spread misinformation, distort facts, and stoke outrage within minutes.
The rise of Artificial Intelligence (AI) tools has made this easier than ever, allowing fabricated content to appear convincingly real and reach millions within hours. Recent incidents show how this digital manipulation is targeting public figures and sensitive events.
In one such recent case, a video of Board of Control for Cricket in India (BCCI) vice-president Rajeev Shukla was altered using AI and broadcast on a Pakistani TV show, misrepresenting his views on the T20 World Cup situation.
The controversy erupted after a Pakistani cricket show, hosted by former skipper Shoaib Malik, broadcast an edited video of Shukla speaking about the upcoming T20 World Cup 2026 clash. It featured Shukla’s reaction to Pakistan’s U-turn on playing against India in Colombo. However, clearing the air, Shukla said the video was manipulated using AI and urged everyone not to believe such misleading content.
https://x.com/shuklarajiv/status/2021509280827163038?s=46
In another recent case, a video featuring Congress MP Shashi Tharoor surfaced that falsely attributed comments to him praising Pakistan’s diplomatic handling of its ICC Men’s T20 World Cup 2026 boycott.
The AI-generated video was reportedly shared on X by a Pakistani user and showed Tharoor saying, “I think how Pakistan played is indeed brilliant, I don’t know what they would do on field but what they did diplomatically is absolute brilliance, Indian Cricket Board was completely pinned. Hands down. This serves as a lesson that good diplomacy can make even a weak nation appear as a Goliath, Pakistan has done it and I want to appreciate them for it, it is unthinkable.”
https://x.com/ShashiTharoor/status/2021931884322898229
Tharoor dismissed the video as fake, emphasizing that neither the language nor the voice belonged to him. In his post, he wrote, “Ai-generated “fake news,” and not even very good. Neither my language nor my voice.”
Earlier in 2025, in another of several deepfake cases, a video of External Affairs Minister S. Jaishankar apologising to the nation reportedly circulated online. The Press Information Bureau (PIB) clarified that the video was AI-generated and part of false propaganda.
Future Propaganda Warfare
The way people receive and trust information is changing, and changing fast. AI deepfakes (videos, audio, and images that look real but are entirely fabricated) can now make anyone appear to say or do things they never actually did. This technology is no longer just a trick; it has become a powerful tool for propaganda and information warfare.
Deepfakes can spread false messages, create confusion, and make it hard to know what is true. They can also be used to manipulate public opinion, discredit leaders, or influence elections. In such an era, simply seeing or hearing something no longer guarantees it is real.
As AI grows more advanced, the line between truth and fiction will blur. Understanding, checking, and verifying information will become just as important as trusting traditional news sources.
Last year, during Operation Sindoor, amid rising tensions between India and Pakistan, PIB’s fact-checking unit repeatedly refuted misleading claims circulated by Pakistani social media accounts.
In one such clarification, PIB dismissed rumors that the Bathinda airfield had been destroyed in a Pakistani attack, confirming that the airfield remained undamaged and fully operational.
There were also claims that Pakistan had struck Indian military assets such as the S‑400 air defence system or Rafale fighter jets; these were debunked as fake and, in some cases, shown to be based on unrelated images or old footage.
Other videos misidentified as Pakistani attacks on Indian military bases were later found to be old or unrelated clips. There were also false reports that an Indian Air Force pilot had been captured or that major Indian infrastructure had been hit.
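Several of these debunks came down to recognizing that a clip was old or lifted from an unrelated event. One common technique for matching re-encoded copies of the same frame is perceptual hashing. The sketch below is a minimal illustration of the idea, not any tool PIB is known to use, and it assumes frames have already been decoded into grayscale pixel grids.

```python
# Minimal difference-hash (dHash) sketch for spotting recycled footage.
# Assumes frames are already decoded into 2D grayscale pixel grids; a real
# pipeline would use a library such as Pillow or OpenCV to load and resize.

def dhash(pixels):
    """Hash a grayscale grid: one bit per horizontally adjacent pixel pair,
    set when the left pixel is brighter than its right neighbour."""
    return [1 if left > right else 0
            for row in pixels
            for left, right in zip(row, row[1:])]

def hamming(a, b):
    """Count differing bits; a small distance suggests the same frame."""
    return sum(x != y for x, y in zip(a, b))

# A tiny 4x5 toy "frame" and a slightly brightened re-encode of it.
frame = [[10, 20, 30, 25, 15],
         [40, 35, 30, 45, 50],
         [60, 55, 70, 65, 80],
         [90, 85, 95, 88, 92]]
reencoded = [[p + 3 for p in row] for row in frame]

print(hamming(dhash(frame), dhash(reencoded)))  # 0: same frame despite the shift
```

Because each bit only compares a pixel to its neighbour, uniform brightness shifts and mild re-compression leave the hash unchanged; in practice, frames are first resized to a small fixed grid so the distance stays meaningful across resolutions.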
The Pakistani side had intensified a misinformation campaign against India after India’s targeted missile strikes on nine terror camps in Pakistan and Pakistan-occupied Kashmir (PoK) under Operation Sindoor. The disinformation push came amid heightened cross-border tensions and was aimed at countering the impact of India’s successful counterterror operations.
Impact on Both Ordinary and Elite
https://x.com/aravind/status/2021103038824579567?s=20
Misinformation and AI-driven manipulation are no longer harmless online pranks; they have real, sometimes life-threatening consequences. Across the world, people have lost their lives to false rumors, fake videos, or manipulated social media posts that incite panic, vigilante action, or mob violence.
The damage affects both ordinary citizens and the elite, though in different ways. For the general public, misinformation can fuel fear, mistrust, and violent reactions within communities. For political leaders, celebrities, and other high-profile individuals, manipulated content can undermine credibility, distort political narratives, influence elections, or even escalate diplomatic tensions.
Regardless of status, the wider implications are profound, eroding public trust, weakening institutions, and creating a society where distinguishing truth from fabrication becomes increasingly difficult.
In India, for instance, false WhatsApp messages about child kidnappers have triggered mob lynchings, killing innocent people. Deepfake technology has also been repeatedly used to target women by digitally inserting their images into explicit sexual content, causing severe emotional distress, reputational harm, and long-term psychological trauma.
Govt Cracks Down on Deepfakes
In a decisive move to curb the spread of deepfakes, the Centre has strengthened India’s digital regulations by making it mandatory to label AI-generated content, ensure traceability, and require user declarations for such material.
The government has also significantly reduced the time allowed to remove unlawful content, from 36 hours to as little as three hours and has placed direct compliance responsibility on social media platforms and their senior officers.
For the first time, AI-generated material including deepfake videos, synthetic audio, and manipulated visuals has been brought under a formal regulatory framework through amendments to the IT intermediary rules. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 will come into force from February 20.
The revised rules also introduce a statutory definition of “synthetically generated information” (SGI), marking a major step toward clearly identifying and regulating AI-created content in India’s digital space.
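To illustrate the kind of logic these obligations imply on the platform side, here is a hypothetical compliance-check sketch. The field names (`ai_generated`, `sgi_label`, `flagged_at`) and the data structure are illustrative assumptions, not a reading of the actual rule text.

```python
# Hypothetical sketch of a platform-side check under rules that mandate
# labels for synthetically generated information (SGI) and a short takedown
# window. Field names and the post structure are illustrative assumptions.
from datetime import datetime, timedelta, timezone

TAKEDOWN_WINDOW = timedelta(hours=3)

def needs_action(post, now):
    """Return a list of open compliance issues for one post."""
    issues = []
    # AI-generated material must carry an SGI label.
    if post.get("ai_generated") and not post.get("sgi_label"):
        issues.append("missing SGI label")
    # Flagged unlawful content must come down within the takedown window.
    flagged = post.get("flagged_at")
    if flagged and not post.get("removed") and now - flagged > TAKEDOWN_WINDOW:
        issues.append("takedown deadline exceeded")
    return issues

now = datetime(2026, 2, 20, 12, 0, tzinfo=timezone.utc)
post = {"ai_generated": True, "sgi_label": False,
        "flagged_at": now - timedelta(hours=4), "removed": False}
print(needs_action(post, now))  # ['missing SGI label', 'takedown deadline exceeded']
```

A real system would of course tie such checks to the rules’ actual definitions and timelines; the point of the sketch is only that labeling and time-bound removal are both mechanically checkable obligations.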