Author(s): Raj Patel and Sweety Haldar
Paper Details: Volume 3, Issue 4
Citation: IJLSSS 3(4) 16
Page No: 168 – 175
ABSTRACT
As Artificial Intelligence (AI) continues to redefine the digital and legal landscapes, a pressing challenge emerges: how to balance technological advancement with constitutional justice. This paper critically examines the tension between AI-driven innovation and the protection of fundamental rights enshrined in the Indian Constitution, particularly Article 21. From deepfake abuse and algorithmic discrimination to mass surveillance and biased recruitment tools, the growing influence of AI raises serious concerns about dignity, privacy, livelihood, and free expression. Through case studies like Mobley v. Workday, Italy’s fine on OpenAI, and the Rashmika Mandanna deepfake incident, this paper reveals the urgent need for ethical governance and legal safeguards. The discourse further explores how AI can be both a threat and a tool, highlighting the potential of privacy-preserving technologies and judicial automation. Ultimately, this article calls for a forward-looking legal framework that upholds justice, fosters transparency, and ensures AI works for the people, not against them.
KEYWORDS: Artificial Intelligence and Law, Constitutional Rights, Privacy and Surveillance, Deepfake and Misinformation, Algorithmic Bias and Fair Trial.
INTRODUCTION
As Artificial Intelligence (AI) reshapes modern society, the balance between innovation and justice is under strain. AI already plays a role in many decisions that affect our daily lives. Yet the increasing dependence on AI in governance, law enforcement, and judicial decision-making raises profound questions about the future of justice, equality, human rights, and constitutional safeguards. The rapid advancement of AI technologies in India has raised serious human rights concerns. Women, children, migrants, and refugees have been particularly affected, often facing bias, discrimination, and violations of privacy. These issues underscore the growing need for responsible AI use and stronger safeguards to protect vulnerable communities.
Fundamental rights are therefore central to the debate on the regulation of Artificial Intelligence. AI has significantly impacted the rights guaranteed under Article 21[1] of the Indian Constitution, namely the right to privacy, the right to live with dignity, and the right to a fair trial, as well as the right to freedom of speech and expression.[2] The right to privacy under Article 21 is threatened by AI technologies such as surveillance tools and data collection systems, which often collect personal information without consent, monitor people’s actions, and store sensitive data without proper safeguards.
Deepfakes are one example: AI-generated videos that manipulate real footage to create hyper-realistic but fabricated content. They have sparked major concerns because of their potential for spreading misinformation, invading privacy, and being used for harmful purposes. By producing incredibly realistic but fake scenarios, they make it hard to tell what is real and what is not. This poses serious risks for individuals and society, especially when deepfakes are used to manipulate politics, harass individuals, or spread false information.[3]
“Right to Live with Dignity”: The widespread use of AI in decision-making processes, particularly in employment, healthcare, and social services, can undermine people’s dignity and livelihood. For instance, biased AI in hiring or automated systems determining access to welfare can create inequality and limit opportunities.
“Right to Freedom of Expression”: AI’s role in moderating online content and shaping social media algorithms can limit free speech by controlling what information people see and share, potentially narrowing the diversity of opinions and stifling public debate.
“The Right to a Fair Trial”: AI tools used in the legal system, such as risk assessments and predictive policing, can introduce biases that affect the fairness of legal decisions, leading to unjust outcomes for individuals.
It is often argued that technological advancements bring new risks to privacy, data protection, and freedom of expression, among other rights. Yet Artificial Intelligence systems can also enhance and support the protection of fundamental rights, as in the case of privacy-preserving technologies. Technologies that meet diverse societal needs, such as sharing information while protecting privacy, are essential for maintaining balance in our modern world.
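One concrete example of such a privacy-preserving technique is differential privacy, which allows aggregate statistics to be shared while masking any individual’s data. The sketch below is purely illustrative and is not drawn from any system discussed in this paper; the function names are the author’s own, and the Laplace-noise mechanism shown is the textbook approach, not a production implementation.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via the inverse-CDF transform."""
    u = random.random() - 0.5  # uniform in [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon: float = 1.0) -> float:
    """Return a differentially private count of records matching predicate.

    A counting query has sensitivity 1 (one person changes the count by
    at most 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy for the released figure.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: release how many respondents are adults without exposing anyone.
ages = [17, 23, 41, 15, 38, 62, 19]
released = private_count(ages, lambda a: a >= 18, epsilon=0.5)
```

A smaller epsilon adds more noise (stronger privacy, less accuracy); a larger epsilon does the reverse, which is exactly the privacy–utility balance the text describes.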
LIFE AND PERSONAL LIBERTY
Article 21[4] of the Constitution of India states that no person can be deprived of life or personal liberty except according to a procedure established by law. It also encompasses the right to privacy, the right to live with dignity, and the right to livelihood.[5] A recent instance of AI misuse is Italy’s fine against OpenAI: in December 2024, Italy’s privacy watchdog, the “Garante”, imposed a €15 million fine on OpenAI, citing serious breaches of privacy rights through its ChatGPT platform. The watchdog found that OpenAI had collected and processed personal data without proper consent or legal justification, violating core principles of privacy. The case highlighted how personal information, potentially including private conversations and sensitive data, was being used to train AI models without users being aware or informed. The lack of transparency and accountability, coupled with insufficient safeguards to verify the age of users (to protect minors), raised concerns about how such powerful AI technologies could misuse personal data. The incident underscored the need for stricter regulations to ensure AI respects individuals’ privacy, prompting broader discussions about how data should be handled in the AI era. It also served as a wake-up call for companies to adopt more ethical practices in data collection and processing.
A similar controversy has unfolded in the United States, where several states balked at an unusual Clearview AI privacy settlement. Clearview AI, a company known for its controversial facial recognition technology, has faced mounting legal challenges over its use of personal data. The company created a vast database by scraping billions of images from social media and other websites without users’ consent. These images were then used to power its facial recognition tools, which it sold to law enforcement and private entities. The practice sparked outrage because it infringed on people’s right to privacy: individuals had no control over how their images were collected, stored, or used, and many were unaware that their photos were part of such a database. Critics argue that Clearview AI’s actions amount to mass surveillance, with serious implications for personal freedom and anonymity in public spaces.
In a recent settlement proposal, Clearview AI promised to limit its services to government agencies, but many states rejected the deal, saying it didn’t go far enough to address privacy concerns. This case highlights how AI can be misused to violate fundamental rights and emphasizes the urgent need for stronger regulations to protect individuals from unauthorized data exploitation.
INDIVIDUAL’S DIGNITY AND AI
The right to live with dignity, enshrined in Article 21[6] of the Indian Constitution and recognized globally, is one of the most fundamental aspects of human rights. However, the rise of artificial intelligence (AI) has brought significant challenges to this principle, often infringing on individuals’ dignity in subtle but far-reaching ways.[7] Such an infringement occurred in the recent shocking case of a UK soldier sentenced to prison for posting deepfake images of his ex-wife and other women on pornographic websites. Jonathan Bates, a Royal Air Force veteran, was sentenced to five years in prison for creating and distributing explicit deepfake images of his ex-wife and three other women without their consent. Bates used artificial intelligence to superimpose the faces of the victims onto sexually explicit images, making them appear authentic. These manipulated images were then shared on adult websites, subjecting the victims to harassment, humiliation, and immense emotional distress. The court heard how Bates’ actions caused significant harm to the victims’ personal and professional lives, with some facing public embarrassment and damage to their reputations. His ex-wife, in particular, reported feeling unsafe and betrayed, as someone she once trusted had used advanced technology to invade her privacy in such a devastating way.
The case underscores the dark potential of AI when misused and highlights the urgent need for stricter regulations and penalties to prevent such abuses. The judge, in delivering the sentence, called Bates’ actions “a gross violation of trust and privacy,” emphasizing the importance of holding individuals accountable for using technology to harm others. This landmark case serves as a stark reminder of the dangers posed by deepfake technology when placed in the wrong hands.
In another case, reported by The Times of India, the Rashmika Mandanna deepfake incident highlights the alarming misuse of AI technology to infringe upon personal privacy and dignity.[8] In November 2023, a deepfake video falsely portraying the popular Indian actress Rashmika Mandanna in a compromising situation went viral on social media. The video, created using artificial intelligence, superimposed her face onto explicit content, causing significant emotional and reputational harm to the actress. The situation quickly drew attention to the dangers of deepfake technology, which allows the creation of highly realistic but fake videos. This technology, while innovative, has increasingly been misused to target public figures, especially women, for malicious purposes. Rashmika Mandanna’s case became a glaring example of how such tools can be weaponized to attack someone’s privacy and dignity. The Delhi Police acted promptly, registering a First Information Report (FIR) and launching an investigation into the incident. After thorough efforts, they apprehended the primary accused, who confessed to creating and circulating the video and admitted that his motive was to gain popularity and followers on social media by sensationalizing the fake content.
Rashmika Mandanna, known for her work in Indian cinema, remained vocal about the incident, emphasizing the need for stronger laws and public awareness to combat the misuse of technology. She expressed her distress at the invasive nature of the video and the larger issue of women being disproportionately targeted by such malicious acts. This case highlights the growing challenges posed by AI-driven deepfake technology. It serves as a wake-up call for stronger legal frameworks to protect individuals from digital exploitation and underscores the importance of ethical standards in technological advancements. The Times of India’s coverage of this case shed light on the societal and legal implications of deepfake misuse, making it a crucial example for research on privacy and technology ethics.
LIVELIHOOD: HUMAN LABOUR AND AI
AI has the potential to significantly impact the right to livelihood, particularly when its implementation replaces human labour or creates systemic biases in employment practices. Two prominent ways AI can infringe upon this fundamental right are job displacement and bias in recruitment.
THE LANDMARK CASE OF MOBLEY V. WORKDAY (2024):
Mobley v. Workday (2024)[9] highlights how artificial intelligence (AI) can inadvertently cause discrimination in hiring. In this case, a job applicant, Mobley, filed a lawsuit against Workday, a company that provides AI-driven recruitment software. Mobley alleged that the AI used by Workday was biased and unfairly discriminated against certain groups on the basis of race, age, or disability. Essentially, the AI system screened out candidates with certain characteristics, limiting their chances of being hired even when they were qualified for the job. The court ruled that companies providing AI-based recruitment services, like Workday, could be held legally responsible for discrimination if their algorithms were found to be biased. The decision is significant because it emphasizes that companies using AI must ensure their systems are fair, unbiased, and compliant with anti-discrimination laws. This case, and many others like it, underscores the potential risks of using AI in hiring and the need for ethical guidelines, regulatory frameworks, and strict oversight to ensure AI technologies do not infringe upon individuals’ rights to fair employment and livelihood.
FREEDOM OF SPEECH AND EXPRESSION
Article 19(1)(a)[10] of the Indian Constitution guarantees the fundamental right to freedom of speech and expression. However, the unchecked use of artificial intelligence (AI) can infringe upon this right in several ways: censorship by AI algorithms, mass surveillance and self-censorship, and the spread of misinformation.
CENSORSHIP BY AI ALGORITHMS
Social media platforms and other digital services rely on AI to moderate content, identifying and removing posts deemed harmful or inappropriate. However, these systems are not foolproof. They often misinterpret satire, dissent, or even legitimate opinions as rule violations, leading to the wrongful deletion of content. This restricts people’s ability to express themselves freely online. For instance, activists and journalists have reported cases where their posts on sensitive political or social issues were flagged or removed by AI, limiting their voice and stifling important discussions.
Shreya Singhal v. Union of India (2015)[11] is one of the most important decisions of the Indian Supreme Court, in which it struck down Section 66A of the Information Technology Act, 2000, as unconstitutional. The case became a milestone in protecting free speech in India, especially in the digital era. It began after two women were arrested for posting comments on Facebook criticizing a bandh (strike) in Mumbai following the death of a political leader. Their posts were considered “offensive,” and the police invoked Section 66A of the IT Act[12] to justify the arrests. The provision criminalized sending online messages that were “grossly offensive” or caused “annoyance,” but its wording was so vague that it left ample room for misuse. Shreya Singhal, a young law student, took the matter to court, arguing that the law violated the fundamental right to freedom of speech and expression guaranteed under Article 19(1)(a) of the Indian Constitution.
The Supreme Court struck down Section 66A, declaring it unconstitutional. The Court reasoned that terms like “grossly offensive,” “annoying,” and “inconvenient” were not clearly defined, so the law could be interpreted in many ways and was prone to misuse; someone could be arrested merely because their opinion annoyed someone else. The Court emphasized that the right to free speech means people may express their opinions even if those opinions are unpopular, critical, or offensive to some. The fact that speech annoys or offends someone is no ground to censor or punish it, and democracy thrives on open discussion and debate. While the Constitution permits some restrictions on free speech, such as for public order, decency, or national security under Article 19(2), the Court ruled that Section 66A did not fit within these reasonable restrictions and instead went too far by criminalizing even harmless or subjective opinions.
The Court pointed out that Section 66A had been misused by authorities to silence dissent and arrest people for minor comments on social media. This kind of misuse went against the democratic principle of free expression. This decision ensured that the internet remained a platform for free expression while setting a strong example of safeguarding democratic values.
MASS SURVEILLANCE AND SELF-CENSORSHIP
AI-powered surveillance tools allow governments and corporations to monitor people’s online and offline activities extensively. This constant surveillance creates a chilling effect: people who know they are being watched may hesitate to express their views, especially on controversial topics, for fear of repercussions. In some regions, AI-powered facial recognition systems have been used to track protesters, discouraging others from joining demonstrations or openly voicing their opinions.
CONCLUSION
Artificial Intelligence is no longer a distant prospect; it is already part of our everyday lives. While it promises progress, it also raises hard questions about how much we want technology to influence our rights and freedoms. This paper shows that although AI can help us work better and faster, it can also endanger our privacy, free speech, and dignity if we do not handle it carefully.
India’s Constitution has always stood as a shield for individual rights, and this must not change in the face of new technology. These challenges of privacy intrusions, biased algorithms, misinformation through deepfakes, and threats to livelihood and dignity demand a proactive and nuanced response.
We need stronger laws, clear safeguards, and practical checks to make sure AI works for people, not the other way around, and does not harm our rights. Open public discussion about the risks of AI is also essential to keep our rights safe as the world changes around us.
At the same time, AI indeed has positive uses. For example, it can help protect privacy by using secure data methods, it can help solve crimes faster, and it can even make courts work more smoothly. So we should not stop the progress of AI. Instead, we need to find the right balance between using technology and protecting people’s rights. For this, we need strong and clear laws that guide how AI should be used. Companies that use AI must follow strict rules about privacy, fairness, and transparency.
The government should also ensure that citizens understand both their rights and the risks linked to AI. Effective checks must be in place to prevent misuse, and strict penalties should deter violations. Courts, lawmakers, and citizens must work together to update outdated laws and ensure that AI benefits everyone. Public awareness is equally important: people should know how AI works, how it collects data, and how they can protect their privacy and rights.
[1] India Const. art. 21.
[2] India Const. art.19, cl. 1(a).
[3] Aaratrika Bhaumik, Regulating Deepfakes and Generative AI in India | Explained, The Hindu (Dec. 4, 2023).
[4] India Const. art. 21.
[5] Giada Zampano, Italy’s privacy watchdog fines OpenAI for ChatGPT’s violations in collecting users’ personal data, AP News (Dec. 20, 2024, 9:04 PM), https://apnews.com/article/italy-privacy-authority-openai-chatgpt-fine-6760575ae7a29a1dd22cc666f49e605f.
[6] India Const. art. 21.
[7] Anna Young, UK soldier sentenced to prison for posting deepfake pics of ex-wife, other women on porn websites, New York Post (Jan. 2, 2025, 6:57 PM), https://nypost.com/2025/01/02/world-news/uk-soldier-sentenced-to-prison-for-posting-sexually-explicit-deepfake-pics-of-women-on-porn-sites/.
[8] Rashmika Mandanna deepfake case: Main accused arrested by Delhi Police, ETimes, The Times of India (Jan. 20, 2024), https://timesofindia.indiatimes.com/entertainment/hindi/bollywood/news/rashmika-mandanna-deepfake-case-main-accused-arrested-by-delhi-police/articleshow/107009990.cms.
[9] Mobley v. Workday, No. ___ (N.D. Cal. 2024), https://www.law360.com/cases/63f751a155df6803e19d4d11/articles?page=2.
[10] India Const. art.19, cl. 1(a).
[11] Shreya Singhal v. Union of India, (2015) 5 SCC 1.
[12] Information Technology Act, No. 21 of 2000, § 66A (India).