Deepfakes As A Weapon Of Gendered Terror: Non-Consensual Synthetic Intimacy And The Systematic Harassment Of Women And Marginalised Communities

Author(s): Avani Bhatia

Paper Details: Volume 3, Issue 6

Citation: IJLSSS 3(6) 16

Page No: 148 – 156

ABSTRACT

India’s rapid development of generative artificial intelligence has fostered an environment in which deepfake technology has become an effective vehicle for gendered abuse. By late 2025, deepfakes targeting women comprised the vast majority of manipulated content across the web, and non-consensual sexual media accounted for most complaints submitted to national reporting systems. Drawing on information from government organisations, independent research bodies and technology-focused investigations, this paper examines synthetic media as part of a broader pattern of digitally mediated gender-based violence. While crimes against women in digital spaces have been trending upward since 2020, deepfakes have emerged as both a symbolic and a practical form of patriarchal domination: they enable humiliation, open pathways to extortion, and perpetuate surveillance that disproportionately harms women and other marginalised communities. This article aggregates available evidence, recorded digital incidents, victims’ narratives and the socio-legal context to position deepfakes as a distinct form of technologically enabled gender-based violence in the Indian context. The author concludes with a call for specific policies, including watermarking standards, platform accountability, caste-sensitive reporting mechanisms and the criminalisation of non-consensual synthetic sexual imagery, that are necessary to tackle this new form of online violence.

INTRODUCTION

Artificial Intelligence-generated manipulated media have significantly changed the digital environment in India. What was once an experimental technological novelty swiftly became a powerful and widely available weapon of harassment.

One of the most visible cases is the deepfake video featuring actress Rashmika Mandanna, which convincingly replicated her face on someone else’s body. The video made headlines across platforms, demonstrating how quickly harmful content can catch on and travel, and how inadequate the mechanisms for immediate redress remain. By 2025, situations like this were no longer limited to high-profile personalities: schoolgirls, college students, activists, journalists and private individuals increasingly became targets.

This broader change is reflected in a sharp rise in cybercrimes against women in India since 2020. Deepfakes, once a minor concern, now dominate complaints about altered sexual imagery and impersonation. This paper treats the deepfake crisis not merely as a technological dilemma but as a manifestation of gender-based control. Technology-enabled gender-based violence in India is closely linked to existing hierarchies: patriarchal orders, caste-based discrimination and communal prejudice. The explosion of synthetic media has widened the terrain available to perpetrators, who can now exploit digital anonymity, manipulate social narratives and weaponise AI tools in ways that disproportionately affect women.

UNDERSTANDING DEEPFAKES

Definitions, Mechanisms, and Evolution of Deepfakes

Deepfakes are synthetic media produced with deep learning, a technique that replicates human appearance, voice and movement with near-unprecedented fidelity. At their core are layered neural networks trained on extensive datasets of human faces and gestures. These networks learn to reproduce visual and auditory cues and can therefore generate artificially manipulated content that looks authentic on the surface. The most widely used production method is the adversarial model, in which two algorithms are paired to teach and test one another simultaneously: one produces fake content while the other tries to find faults, and with each round the generator improves. Advances in model design and computational speed have sharply reduced the time required to generate such content; videos that once demanded substantial processing power and technical expertise can now be produced in minutes on consumer devices. Equally important is the availability of these tools. Where creating deepfakes initially required knowledge of coding and graphics processing, contemporary interfaces abstract this away, allowing users with little technical know-how to create realistic manipulations. This ease of use has made audio deepfakes, face-swaps and synthetic nudes commonplace.
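The adversarial dynamic described above can be made concrete in a few lines of code. The sketch below, which assumes PyTorch is installed, trains a toy generator to mimic a simple one-dimensional Gaussian distribution rather than faces; the network sizes, learning rates and training schedule are illustrative choices, not a production deepfake pipeline, but the generator-versus-discriminator loop is the same mechanism that deepfake tools scale up.

```python
# Minimal adversarial-training sketch (PyTorch). A generator learns to mimic
# a 1-D Gaussian standing in for "authentic media" while a discriminator
# tries to tell real samples from generated ones. Illustrative only: real
# deepfake pipelines use deep convolutional networks trained on face data,
# but the adversarial loop shown here is the same.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n):
    # "Real" data: samples from N(4.0, 1.25).
    return torch.randn(n, 1) * 1.25 + 4.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator (logits)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(3000):
    # 1) Train the discriminator: real samples labelled 1, fakes labelled 0.
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real_batch(64)), ones) + bce(D(fake), zeros)
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # 2) Train the generator: produce samples the discriminator scores as real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

# After training, generated samples should roughly match the target distribution.
with torch.no_grad():
    sample = G(torch.randn(1000, 8))
print(f"generated mean={sample.mean():.2f}, std={sample.std():.2f} (target: 4.00, 1.25)")
```

Each iteration the discriminator's feedback pushes the generator closer to the real distribution, which is why, scaled to faces and voices, the same loop yields output that "superficially looks authentic."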

INDIAN DEEPFAKE ECOSYSTEM AND ACCESSIBILITY

India offers an environment in which deepfake abuse can flourish. Cheap data, widespread smartphone ownership and high social media engagement have made the country particularly vulnerable. Open-source tools such as DeepFaceLab and numerous diffusion-based packages circulate with simplified instructions aimed at Indian audiences. Unsurprisingly, Telegram-based “nudify bots” dedicated to creating explicit synthetic images are causing alarm. These bots typically operate in regional languages and are marketed as cheap services that require no more than a single photo. Payment through instant UPI transfers or anonymous cryptocurrency transactions lets users participate without leaving an identifiable trail. Investigations by research collectives and cybersecurity groups have uncovered numerous clusters of Telegram channels and groups devoted to sharing deepfakes. Such channels advertise “one-shot” tools that can generate synthetic sexual images from a single social media photograph, sometimes in seconds. Some of these groups maintain explicit rules encouraging the targeting of women and minors, and some cite ideological justifications rooted in honour, control or misogynistic subculture narratives. The ecosystem also benefits from global forums, where compressed “celebrity packs,” requests for personalised synthetic images and tutorials specific to Indian targets spread widely. Much of this activity involves young urban male users with fast internet connections, gaming-grade hardware and access to worldwide communities eager to experiment with AI-generated content. These networks offer a glimpse of how technology meets existing cultural biases, and how the two together threaten women’s spaces.

THE MAGNITUDE OF THE DEEPFAKE CRISIS IN 2025

Deepfake crimes are severely underreported, but available data point to a steep rise in complaints. Year-on-year case numbers show a sharp increase, especially from 2024 to 2025, across several categories of technology-facilitated abuse. Most reported cases involve non-consensual imagery of women. Victims range from school-aged girls to working professionals, with most aged 18 to 30. Cases are most frequently reported in urban areas with higher internet penetration, while anecdotal reports suggest rural incidents are rising but often go unreported because of stigma and fear. Deepfakes also account for a significant share of technology-facilitated gender-based violence cases identified nationwide. The proliferation of AI tools, coupled with the stigma attached to sexualised images in Indian society, has created a setting in which women are largely disempowered. Many victims do not seek legal help, fearing victim-blaming, procedural delays and damage to family or community reputation. India also hosts a plethora of live deepfake marketplaces on encrypted platforms. Growing traffic to sites featuring deepfake content indicates rising consumer demand for AI-generated material. This underscores a burgeoning digital underworld in which synthetic sexual imagery circulates with scant accountability to regulatory agencies.

POTENTIAL RISKS AND THE SEVERITY OF DEEPFAKES

Deepfakes as a National and Global Threat

Threat to Democratic Stability

In addition to gendered harassment, deepfakes threaten democratic processes. Political deepfakes can shape public opinion, spread misinformation and erode faith in electoral institutions. With its sprawling digital population, varied media consumption habits and soaring polarisation, India’s digital environment allows manipulated political content to go viral within hours. Cases from abroad highlight the risks: deepfake videos of political leaders have already surfaced in regions of conflict, showing how synthetic media can shape geopolitical conversations. Given the scale of Indian elections, the potential for AI-generated misinformation to derail democratic debate is rising.

Effects on Public Trust and Information Systems

Deepfakes also strain newsrooms, fact-checking organisations and public institutions. As telling real from fake content becomes harder, trust in visual evidence erodes. Even when proven fake, deepfakes foster public confusion and cynicism. The burden of verification also slows the press and deepens public scepticism.

Effects on Women’s Safety and Reputation

Deepfake pornography is especially dangerous because it turns sexualised images into weapons to shame, silence or coerce women. Even after people learn that a video is fake, the social and emotional damage persists. Long-standing stigmas in Indian society surrounding sexuality, family honour and women’s agency compound the harms. Victims frequently withdraw from digital spaces, suffer harassment from peers or become estranged from their communities. Deepfakes act not only as a technological menace but as a significant social challenge, solidifying patriarchal narratives and restricting women’s participation in public life.

LITERATURE REVIEW

Scholarly interest in deepfakes has soared since the late 2010s. Early research focused on the impact of synthetic media on privacy, misinformation and political manipulation, with scholars cautioning that deepfakes might erode public trust in institutions, distort democratic processes and provoke general confusion over the legitimacy of visual content. Feminist researchers, meanwhile, highlighted the gendered dynamics of deepfake production: from its origins, deepfake pornography has been directed almost exclusively at women. International organisations have likewise documented the rise of online misogyny, identifying synthetic sexual imagery as particularly pernicious. India-focused research, however, remains scarce. Studies frequently centre on related topics such as cyberstalking, image morphing or revenge porn without scrutinising what makes deepfakes distinct. Few, if any, analyses examine how caste, religion or regional identity intersect with gender in the context of digital abuse. Existing legal scholarship tends to critique outdated frameworks without offering strategies for addressing synthetic media. This article aims to bridge these gaps by combining socio-legal analysis, feminist theory and digital ethnography.

METHODOLOGY

This analysis employs a qualitative socio-legal lens to map the terrain of deepfake abuse in India. Data come from national cybercrime complaint systems, reports from women’s commissions, survey findings from gender and technology NGOs, and analyses by policy research institutions. Digital ethnography forms an important component, drawing observations from social media sites such as Facebook, internet forums, encrypted messaging channels and user-generated content networks. These observations provide insight into perpetrator behaviour, deepfake communities, and how synthetic content is distributed and spreads. Legal frameworks are reviewed to assess the strengths and weaknesses of current law, examining provisions in cyber legislation, new laws and proposed amendments that could help counteract AI-driven harm. Case studies are included to illustrate experiences across regions, occupations and socio-economic situations. Despite data constraints, especially underreporting and restricted access to closed online groups, triangulation across multiple sources improves reliability. The narrative approach adds depth to the picture of how deepfakes intersect with social norms, technological change and legal gaps.

CASE STUDIES & PSYCHOLOGICAL AND SOCIAL HARM

The deepfake featuring actress Rashmika Mandanna was a tipping point in India’s awareness of synthetic media. While she was not the first celebrity to be targeted, the incident underscored how quickly and widely such an attack spreads. The video sparked a flood of remixes, copies and derivatives, showing how early, and how thoroughly, public discourse can normalise the violation of a woman’s digital identity.

Female journalists in India operate in a uniquely hostile digital environment, especially when covering political or social justice issues. Some reporters have already faced targeted explicit deepfakes as punishment for their work. These manipulations are designed to discredit them as professionals and mute their voices, demonstrating how deepfakes can be wielded strategically to stifle dissent and keep women out of public discussion.

The most disturbing cases involve schoolgirls in Kerala, whose faces were incorporated into explicit images shared with classmates and strangers. Children can suffer serious psychological consequences in these situations. These incidents illustrate that deepfake abuse does not require adult victims: children are targeted simply because their photographs are easy to obtain online.

The case of Faridabad shows that deepfake abuse does not stop at the immediate victim. Manipulated images of two sisters were used to blackmail their family, and the emotional toll ultimately led to a tragic suicide. The case reveals the harsh reality of honour-based stigma in India, where a family’s reputation is often tied to perceptions of women’s purity.

Deepfake campaigns seeking to silence women activists, particularly those from underprivileged communities, are common. These cases illustrate how deepfakes collide with structural hierarchies, intensifying the harassment of groups that are already vulnerable.

The emotional impact of deepfake abuse is severe. Many victims suffer trauma comparable to that of sexual assault; anxiety, depression and social withdrawal are widespread. For many women, fear of judgement or disbelief from family members adds to the distress. The stigma attached to sexual imagery, whether or not it is authentic, drives many victims to retreat from social media and public life. The toll can be even heavier in rural areas, where mental health services are scarce and cultural pressures more intense. Even when the images used in deepfakes were taken from private or restricted accounts, many victims are blamed for having shared their photos online; this blame is internalised, leading to lasting psychological injury.

INTERSECTIONAL DIMENSIONS

Deepfakes do not affect all women in the same way. Existing social hierarchies shape both the character of the abuse and its fallout.

Muslim Women

Muslim women are often subjected to communalised misogyny online. Deepfake abuse targeting them frequently mixes religious slurs, moral judgements and narratives that cast them as threats to cultural purity. This double marginalisation, as women and as members of a minority group, increases their vulnerability.

Dalit Women

Dalit women face caste-based humiliation and fetishisation. Deepfake material targeting Dalit women may embed casteist narratives that reinforce social hierarchy. These attacks are intended to degrade not only individuals but entire communities.

Adivasi Women

Adivasi women who mobilise for land or political rights have become known targets of deepfake images used to intimidate them. The technology thus becomes a strategy for silencing voices in socio-environmental conflicts.

LEGAL AND REGULATORY FRAMEWORK

Current legislation in India has failed to keep pace with the accelerating development of AI. Much of the relevant law was written before deepfakes existed, leaving loopholes that abusers can exploit. While portions of the IT law address voyeurism, harassment and impersonation, these provisions typically presuppose real images or direct physical acts. Deepfakes pose a new challenge: the damage to the victim is real, but the image is synthetic. Without legally defined categories, prosecutors struggle to obtain convictions. Draft regulations have proposed mandatory AI labelling and swift takedown procedures, though enforcement is spotty, and India lacks platform accountability measures such as mandatory watermarking. The absence of specific laws leaves deepfake victims navigating a patchwork of provisions designed for other forms of digital crime, a slow, confusing and often emotionally exhausting exercise.

ETHICAL AND TECHNICAL RESPONSE

Detection technologies have come a long way, but they lag behind increasingly sophisticated generation methods. Some deepfakes now contain so few visual inconsistencies that traditional forensic analysis struggles to pick them up. Researchers therefore propose solutions such as cryptographic watermarking, verification of authenticity at the creation stage, and metadata systems that track the provenance and editing history of digital content. Ethical solutions also require structural changes in how platforms function. Social media companies might implement pre-publication AI screening to filter out non-consensual content, while consent mechanisms, user-controlled privacy settings and expedited reporting processes might curb the dissemination of harmful media. A broader cultural change is also required: no technological solution can by itself fix the root causes of deepfake abuse without confronting the social norms that stigmatise and police women’s bodies.
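To make the provenance idea concrete, the minimal sketch below (Python standard library only) signs a media file’s hash at the moment of creation and verifies it later. The shared HMAC key, function names and manifest fields are hypothetical simplifications introduced for illustration; production standards such as C2PA instead embed PKI-signed manifests inside the file itself, but the tamper-evidence logic is analogous.

```python
# Simplified provenance sketch: sign a media file's hash at creation time and
# verify it later. A shared HMAC key stands in for real PKI certificates, and
# the manifest fields are illustrative, not a real standard's schema.
import hashlib
import hmac
import json
import time

SECRET_KEY = b"demo-key-held-by-the-capture-device"  # hypothetical key

def issue_manifest(media_bytes: bytes, creator: str) -> dict:
    """Create a signed provenance record when the media is first captured."""
    payload = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "creator": creator,
        "issued_at": int(time.time()),
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    return payload

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check the signature, then check the file is byte-identical to the original."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    body = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False  # the manifest itself was tampered with
    return hashlib.sha256(media_bytes).hexdigest() == claimed["sha256"]

original = b"\x89PNG...original image bytes..."
manifest = issue_manifest(original, creator="verified-camera-app")
print(verify_manifest(original, manifest))                # True: authentic
print(verify_manifest(original + b"edited", manifest))    # False: altered
```

The point of such systems is not to detect fakes after the fact but to let authentic content prove its own origin, so anything lacking a valid manifest, including a deepfake, is treated with suspicion by default.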

RESEARCH GAPS

Despite growing scholarly interest, a number of themes remain unaddressed in the Indian literature:

Data on the gendered dimensions of deepfake abuse are scarce. The behaviours and motivations of Indian deepfake perpetrators are barely investigated. Intersecting vulnerabilities, including caste, religion and region, are under-examined and need investigation. Legal commentary does not yet adequately grasp the problems generated by synthetic imagery. More empirical research is needed into the psychological impact of deepfake abuse. Few studies explore the economics of creating and distributing deepfakes. Regulatory proposals that account for India’s distinctive technological and social circumstances remain underdeveloped. This article highlights these gaps to help drive more extensive, context-specific research.

CONCLUSION

Deepfakes in India represent a lamentable meeting of technological expansion and structural social stratification. Whatever the creative and benign uses of synthetic media, its misuse has become a widespread threat to women’s autonomy, dignity and safety. The explosive growth of deepfakes from 2024 to 2025 illustrates how AI tools can be weaponised in societies where patriarchal norms continue to hold sway. Responding to deepfake abuse demands a multi-layered approach. Technological measures such as watermarking and detection systems must be accompanied by legal reform that criminalises non-consensual synthetic sexual imagery. Platforms should take a larger role, with stronger proactive safeguards and more responsive redress mechanisms. Reporting systems must also acknowledge caste-based discrimination and deliver culturally appropriate help to victims. None of this will be possible without changing the country’s deeply rooted attitudes towards gender, honour and morality; social change must work in partnership with government action to build a safer digital space as India’s digital evolution continues. As AI becomes more sophisticated and dynamic, protective frameworks must evolve in step to prevent misuse. Ensuring that technology amplifies women’s voices rather than silences them will require commitment, informed policy and collective awareness.
