The Scale of Synthetic Deception
Cybersecurity analysis from Dr. Anil Rachamalla, Vice President at FourthSquare and co-founder of the Council for Digital Safety & Wellbeing, indicates that approximately 90% of Indian social media users now encounter synthetic content during routine browsing. These AI-generated assets, which include hyper-realistic faces and cloned voices, have moved beyond novelty to become a primary tool for digital deception. Rachamalla estimates that within just 10 minutes of scrolling, a typical user is likely to encounter misleading or manipulated media.
Financial Fraud and Targeted Manipulation
The weaponization of this technology frequently targets financial assets. Fraudsters use AI voice-cloning tools like ElevenLabs to replicate the voices of family members, prompting panicked transfers of funds. In other instances, deepfakes of prominent figures such as Amitabh Bachchan, Shahrukh Khan, and Rashmika Mandanna are deployed to endorse fraudulent investment schemes. These scams often funnel victims into encrypted platforms like WhatsApp or Telegram, where the recovery rate for stolen deposits is estimated at less than 0.1%.
From Virtual Influencers to Social Instability
The normalization of synthetic personalities, such as the virtual influencer Kyra—who has secured partnerships with brands like Amazon Prime Video and boAt—has lowered public skepticism toward non-human digital entities. This shift is being exploited to create content that targets religious and communal sentiments, aiming to incite social tension. The phenomenon mirrors the psychological experimentation seen in media productions like the Spanish show Falso Amor, which uses AI-generated intimacy to provoke emotional responses from audiences.
The Global Reach of AI-Driven Extortion
Beyond financial loss, the technology is increasingly used for sextortion, in which criminals generate explicit content from a single static image to blackmail victims. Unlike traditional fraud, which often relies on specific demographic vulnerabilities, AI-driven deception is platform-agnostic, requiring only that the victim own a smartphone. As these tools become more accessible, the barrier to entry for malicious actors has effectively vanished, creating a borderless threat that affects users regardless of their digital literacy.
Source: Times Now.