Author(s): Kushagra Vats
Paper Details: Volume 3, Issue 4
Citation: IJLSSS 3(4) 12
Page No: 115 – 130
I. INTRODUCTION
“Code is law,” Lawrence Lessig famously asserted, capturing the fact that in contemporary digital society algorithms often exercise more control over human lives than legislatures or courts.[1] Artificial Intelligence (AI), previously the province of science fiction, has now taken root in almost every aspect of public and private life. In India, AI-based tools help banks assess creditworthiness,[2] automate transportation logistics, and are even being considered for legal research and sentencing analysis. These machine learning-driven and neural network-based systems develop by processing enormous datasets, frequently operating without immediate human oversight.
This emergence of autonomous decision-making systems exposes a fundamental gap in India’s legal framework. The majority of Indian laws rest on anthropocentric agency: either human intention (as in criminal law) or human consent (as in contract law).[3] Yet where an AI system makes a biased or mistaken decision, such as denying a loan on the basis of skewed historical data or misidentifying faces in facial recognition software, there is no ready way to assign blame. The standard legal dichotomy of object and subject proves too narrow to deal with the nested agency of AI.
Furthermore, current liability frameworks wrestle with the “many hands problem”, in which responsibility is dispersed across various actors such as developers, data suppliers, users, and even regulators. This dispersion is amplified when the AI system, through unsupervised learning, exhibits emergent behaviour that no one programmed directly. In such cases, tort law’s emphasis on foreseeability and criminal law’s reliance on mens rea fail to capture accountability effectively.
India is at a regulatory turning point. While jurisdictions like the European Union have tabled exhaustive legal proposals such as the Artificial Intelligence Act, and “electronic personhood” debates have gained currency worldwide,[4] Indian law remains mostly silent. The proposed Digital India Act could offer a chance to implement a rights-oriented, technologically adaptive AI regulatory framework, though its direction remains unclear.
This paper seeks to bridge this doctrinal vacuum by critically engaging with the following questions: Firstly, should AI systems in India be conferred limited legal personhood, akin to corporations or deities under Indian jurisprudence? Secondly, how should liability be allocated when AI operates autonomously? Thirdly, can constitutional and tort principles be adapted to safeguard human dignity and assign legal responsibility in the age of machine agency? Drawing on comparative legal evolution, Indian constitutional jurisprudence, and interdisciplinary scholarship, this paper proposes an equitable framework that harmonizes technological innovation with democratic accountability.
II. LEGAL STATUS QUO: AI AS A TOOL, NOT A LEGAL SUBJECT
Even as Artificial Intelligence becomes increasingly pervasive across sectors of the Indian economy, the law still regards AI as nothing more than an object, an instrument of human agency or corporate apparatus, rather than as a quasi-autonomous entity. Indian legislation, rules, and judicial precedent are all anthropocentrically oriented, dealing only with human agents or statutorily acknowledged legal persons such as corporations. This creates a deep disjuncture between the law and technological reality, especially when AI systems autonomously produce results with legal, financial, or social implications.
A. LEGISLATIVE FRAMEWORK
Indian law does not recognize AI as a legal person. The Information Technology Act, 2000, the main legislation governing digital interactions, deals mainly with cybersecurity, intermediary liability, and data privacy.[5] Section 43A of the Act imposes civil liability for failure to safeguard sensitive personal data but presumes that human actors (natural persons or companies) are the accountable parties.[6] Nowhere does the Act contemplate autonomous software agents as accountable actors, even though algorithmic systems increasingly process data and make decisions.
In the same vein, the Indian Contract Act, 1872 limits the capacity to contract to natural persons and juristic persons such as companies or associations.[7] Section 10 of the Act sets out the conditions for a valid contract, including “free consent” and the competency of the parties. AI systems, lacking consciousness and legal personality, can neither consent nor form intention, making them ineligible to enter into or perform contracts as principals. Even when AI executes contractual operations, legal liability rests on the human or corporate principal behind the system.
Under criminal law, the Indian Penal Code, 1860 is rooted in the doctrine of mens rea, the mental element of a crime.[8] Sections such as 84 and 299 presuppose an intentional or reckless human act. As AI systems act without consciousness, intention, or knowledge of fault, they cannot be brought within traditional criminal liability. Although AI-caused harm might arguably give rise to vicarious liability, this stretches existing doctrines, especially where there is no clear evidence of human fault.
These statutory constraints are particularly problematic in real-world cases where AI autonomously causes harm. In 2023, a leading Indian fintech used a machine learning model to assess loan proposals. Over time, the algorithm itself developed discriminatory lending patterns against vulnerable groups, not through explicit programming but through biases in the training data.[9] The company was eventually held to account, but the absence of malicious intent or foreseeability was invoked to highlight the “accountability gap.” If liability can attach only to the human operator, even where the harm flows from the AI’s emergent behaviour, there is an inherent gap between causation and culpability.
B. JUDICIAL SILENCE
Indian jurisprudence has not yet evolved to address the legal consequences of autonomous AI systems. No reported judgment has explicitly considered whether AI can be held liable or whether harm caused by AI requires revisiting the principles of tort or criminal law. However, existing constitutional doctrine offers foundational guidance for future legal development.
In Justice K.S. Puttaswamy v. Union of India, the Supreme Court recognized the fundamental right to privacy and emphasized that technological systems must operate within the boundaries of constitutional values such as dignity, autonomy, and informational self-determination.[10] The Court cautioned against technological encroachments on civil liberties and underscored the need for “structural safeguards” to ensure responsible innovation.[11] Though the case did not concern AI specifically, its reasoning supports the extension of constitutional accountability to automated systems, especially those involved in governance, surveillance, or public service delivery.
The absence of a clear judicial or legislative position on AI has left stakeholders, including regulators, businesses, and litigants, without a coherent framework. This legal vacuum is unsustainable as India accelerates its adoption of AI in critical domains such as healthcare, agriculture, transportation, and policing. Without reform, courts may soon be forced to adjudicate liability in cases where no human actor can be directly blamed and existing doctrines offer little recourse to the injured party.
III. THE CASE FOR ELECTRONIC PERSONHOOD
The increasing autonomy of AI systems requires a reappraisal of conventional legal models that assume human agency as a prerequisite for liability. When AI agents act autonomously without immediate human oversight, learning, adapting, and making decisions with significant consequences, conventional liability models fail. To meet this gap, some institutions and legal theorists, most prominently the European Parliament, have advocated endowing AI entities with a form of electronic personhood.[12] This does not mean anthropomorphizing machines or elevating them to the level of humans, but rather conferring a legal personality adequate to affix obligations, liabilities, and, where appropriate, rights. The objective is practical: to fill legal loopholes and enable serious accountability in an algorithmic age.
A. COMPARATIVE ANALOGIES
Indian law has already shown conceptual flexibility in granting legal personhood to non-human entities, and these precedents form a jurisprudential basis for considering the same for some AI systems. Firstly, corporations have long been treated as artificial legal persons, with the capacity to own property, enter into contracts, and sue or be sued in their own right. In Tata Engineering & Locomotive Co. Ltd. v. State of Bihar, the Supreme Court underscored corporate personhood as a useful legal fiction for assigning responsibility and rights.[13] If a body artificially created by incorporation or statute can have legal agency, conferring comparable status on autonomous AI is not conceptually unthinkable.
Secondly, Hindu idols, despite their non-sentience, have been endowed with legal personality in Indian law. In Yogendra Nath Naskar v. C.I.T., the Court recognized the juridical capacity of idols to hold property and be represented in court.[14] They are granted such status not by virtue of physical existence but by virtue of their socio-religious role and the need for legal protection of their interests. Likewise, autonomous AI, which engages deeply with human affairs and can affect rights and obligations, could be accorded a derivative personhood to regulate its operations legally.
Thirdly, in an ecologically significant judgment, the Uttarakhand High Court declared the Ganga and Yamuna rivers juridical persons possessing rights and obligations.[15] In Mohd. Salim v. State of Uttarakhand, the Court invoked the need to conserve and safeguard essential environmental features, emphasizing that personhood is a juristic means to broader policy ends rather than a metaphysical category. This instrumental, rather than intrinsic, understanding of personhood paves the way for similar treatment of AI, particularly where attributing legal personality promotes justice, deterrence, and accountability.
Thus, Indian legal history confirms that personhood exists as a legal fiction adopted for practical purposes. If idols and rivers may be granted personhood for the protection of public interest, there is space for jurisprudence to extend such recognition to sophisticated AI systems functioning in socially sensitive areas of healthcare, financial services, predictive policing, or automated surveillance.
B. SCOPE OF LEGAL PERSONALITY
The electronic personhood proposal is not a call for anthropomorphism. Instead, it is a call for a limited, instrumental form of legal status which, like corporate personhood, permits the law to assign obligations and allocate liability efficiently. The term “smart agents” is used in EU policy documents to describe AI systems capable of executing legal acts such as owning digital property, entering into transactions, or producing outcomes with legal consequences.[16] Legal personhood for these agents would entail the following consequences:
i. Tort and Contractual Liability: Autonomous AI systems could be sued directly for injury resulting from their actions, without the requirement of establishing direct human intent or negligence. This would simplify litigation and clarify responsibility in intricate algorithmic relationships.[17]
ii. Judicial Review of AI Decisions: Personhood could provide a straightforward statutory avenue for challenging AI-generated decisions, notably in administrative law settings. For instance, if an AI system employed in welfare allocation refuses benefits on erroneous reasoning, a claimant might seek review against the system itself as a legal party.
iii. Liability Funds and Insurance: Modelled on corporate insurance patterns and motor vehicle compensation schemes, an AI system could be backed by a liability fund or compulsory insurance.[18] Such a mechanism guarantees victim compensation even where direct responsibility cannot be assigned to a human developer or user.
Crucially, this partial personhood would not confer civil or political rights: no vote, no marriage, no standing for office. The aim is not to anthropomorphize AI but to establish a legal framework where none exists. This is the approach taken in European Parliament Resolution 2015/2103(INL), which advocated “electronic personhood” for advanced robots with the highest-level decision-making abilities.[19]
Critics warn that such recognition will blur the boundary between human and machine. But as Helen Nissenbaum suggests, legal accountability must keep pace with technological design: if AI acts like an agent, legal frameworks must treat it that way to preserve coherence and deterrence. Moreover, by allocating bounded personhood, the law can also impose duties, such as disclosure requirements, ethical training data, and algorithmic transparency, which cannot easily be enforced under the existing “AI-as-a-tool” paradigm.
IV. CONSTITUTIONAL, ETHICAL, AND HUMAN RIGHTS IMPLICATIONS
Granting legal personhood to AI might remedy regulatory and accountability shortcomings, but it also invites intricate constitutional and ethical challenges. In a legal order such as India’s, founded on the principles of dignity, autonomy, and equality, any expansion of legal recognition to non-human entities must be weighed against the potential dilution of foundational rights protections and normative precision.
A. THE HUMAN RIGHTS PARADOX
Indian legal personhood is not merely a technical designation but is inextricably tied to normative conceptions of human dignity and moral agency. The Supreme Court, in Navtej Singh Johar v. Union of India, reiterated that constitutional morality is founded on the “primacy of individual autonomy and dignity” and is meant to safeguard marginalized and vulnerable groups.[20] Declaring AI a legal person, albeit a partial one, potentially jeopardizes these values by stretching the parameters of personhood to include entities not subject to human suffering, consciousness, or moral agency.
This concern is especially pressing in India, where the social and legal environment is still grappling with deep-seated issues such as caste-based discrimination, gender inequality, digital illiteracy, and economic exclusion.[21] In such circumstances, prioritizing the recognition of machine agents while numerous human communities continue to fight for recognition and protection may produce a moral dissonance. Additionally, the symbolic value of personhood, frequently invoked to legitimate the rights claims of oppressed communities, would be undermined if it were too readily applied to non-sentient things. As writers such as Martha Nussbaum contend, personhood is not only a juridical category but also a moral marker grounded in vulnerability and empathetic imagination.[22] AI lacks those characteristics and therefore ought not be equated with human or animal juridical subjects.
In addition, constitutional safeguards such as Article 14 (equality) and Article 21 (right to life and dignity) have evolved through liberal judicial interpretation to safeguard human interests in an increasingly inclusive democracy.[23] Any framework that threatens to place AI equally under such protections must be approached with extreme caution, lest a slippery slope render human suffering legally comparable to computational error.
B. DATA BIAS AND DISCRIMINATION
Perhaps the most immediate danger autonomous AI presents is the magnification of systemic prejudice. Because AI systems learn from historical data, they tend to mirror and perpetuate existing structures of inequality.[24] In the U.S., predictive policing software such as PredPol has been criticized for disproportionately targeting Black and Latino communities on the basis of historically discriminatory crime data.[25] Facial recognition software has likewise been shown to perform poorly on darker-skinned women. These results are not anomalies; they are structural echoes of the data from which AI learns.
In India, the stakes are arguably even higher owing to the interaction of caste, religion, region, and economic inequality. If AI systems are applied in domains such as loan sanctioning, predictive policing, welfare targeting, or public health resource allocation without vigilant scrutiny, they could institutionalize discrimination under the cover of efficiency and objectivity.[26]
To counteract such risks, legal protections need to be actively integrated into the AI governance system. The following are some crucial interventions:
- Mandatory Fairness Audits: AI systems, particularly those used in high-stakes areas, must be regularly audited for bias, disparate impact, and explainability. These audits should be carried out by independent, diverse expert bodies and their results published (a minimal audit sketch follows this list).
- Human-in-the-Loop Requirements: Decisions with social or legal consequences, such as welfare eligibility, law-enforcement targeting, or employment screening, must involve a human decision-maker who can overrule or contextualize AI recommendations.
- Consent and Transparency Requirements: Under India’s recently enacted Digital Personal Data Protection Act, 2023, AI systems will have to be developed with clear consent mechanisms for data processing and explainable outputs that allow people to dispute or appeal algorithmic determinations.[27] This is consonant with the doctrine of informational self-determination affirmed in Justice K.S. Puttaswamy v. Union of India.[28]
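To make the audit intervention concrete, the following is a minimal sketch, in Python, of the kind of disparate-impact check an independent auditor might run, using the widely cited “four-fifths” heuristic from employment-discrimination practice. The group labels, data, and function names are hypothetical illustrations, not drawn from any statute or deployed system.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Per-group approval rates from (group, approved) records."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold` times
    the most-favoured group's rate (the 'four-fifths' heuristic)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: round(r / best, 3) for g, r in rates.items() if r / best < threshold}

# Hypothetical loan decisions: 80% approval for group A, 50% for group B.
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 50 + [("B", False)] * 50)
print(disparate_impact_flags(decisions))  # {'B': 0.625} -> audit flag raised
```

A real audit would go further, probing proxies for protected attributes and demanding explanations, but even this simple ratio shows how bias inherited from training data can be surfaced, quantified, and published.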
These measures embody a people-first ethos, a human-centred ethos that prioritizes dignity, oversight, and accountability over algorithmic self-governance. As AI increasingly acts as the mediator of access to basic resources and opportunities, the legal system must be able to ensure that it acts within constitutional and ethical limits, not outside them.
V. GLOBAL APPROACHES TO AI REGULATION
With AI technologies developing at a pace that outstrips conventional governance, jurisdictions everywhere have taken varied approaches to regulating AI. While some, the European Union for instance, rely on comprehensive regulatory frameworks, others, including the United States and China, have adopted sector-specific or control-centric models. India, in the process of finalizing its Digital India Act, has a rare chance to adopt selectively the best practices from across the globe, in consonance with its constitutional tenets and socio-economic realities.
A. THE EUROPEAN UNION
The European Union’s Artificial Intelligence Act (AIA), proposed in April 2021, is the most comprehensive and ambitious attempt to date to regulate AI through a single legislative instrument.[29] The Act uses a risk-based categorization scheme, dividing AI systems into four levels, summarized schematically after the list below:
Firstly, Unacceptable Risk: AI systems that are fundamentally incompatible with EU values, such as government social scoring, subliminal manipulation, or real-time remote biometric identification in public places, are banned outright.[30]
Secondly, High Risk: These include AI used in vital infrastructure (transport, water supply), education, law enforcement, biometric identification, and credit scoring. Such systems are allowed, but only under strict requirements covering transparency, human oversight, accuracy, and cybersecurity.[31]
Thirdly, Limited Risk: AI systems like chatbots or emotion detection software have to meet obligations of transparency, e.g., telling users that they are dealing with an AI system.[32]
Fourthly, Minimal Risk: Products such as spam filters or video game AI are largely unrestricted, although designers are invited to adopt voluntary codes of practice.[33]
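The tiered logic of the AIA can be rendered schematically. The short Python sketch below pairs each tier with its headline obligation; the tier assignments merely paraphrase examples published with the Act, and all identifiers are illustrative rather than official.

```python
from enum import Enum

class Risk(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "conformity assessment, human oversight, EU database registration"
    LIMITED = "transparency duty: disclose that the user faces an AI system"
    MINIMAL = "no binding duties; voluntary codes of conduct encouraged"

# Illustrative use cases, paraphrasing examples given in the Act.
TIER_OF = {
    "government social scoring": Risk.UNACCEPTABLE,
    "credit scoring": Risk.HIGH,
    "law-enforcement biometrics": Risk.HIGH,
    "customer-service chatbot": Risk.LIMITED,
    "email spam filter": Risk.MINIMAL,
}

for use_case, tier in TIER_OF.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```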
Significantly, the AIA introduces conformity assessments for high-risk AI, compulsory registration in an EU database, and the establishment of a European Artificial Intelligence Board for coordination. This is an integrated model built on the precautionary principle, with the EU stressing that fundamental rights such as non-discrimination and data protection must be integrated into algorithmic design.[34] The Act offers a human-centred, rights-based regulatory framework that India could follow in fields like finance, education, and policing.
B. THE UNITED STATES
The United States has avoided a centralized regulatory framework for AI, opting instead for a sector-based approach in which individual agencies regulate AI within their respective domains.
- The Food and Drug Administration (FDA) oversees AI in medicine, particularly algorithms used for diagnosis, medical imaging, and robotic procedures.[35] It has also issued guidance on adaptive algorithms that learn from experience, a vital consideration for post-deployment behaviour.
- The National Highway Traffic Safety Administration (NHTSA) regulates autonomous vehicles, producing safety guidelines and voluntary standards like the Automated Driving Systems 2.0: A Vision for Safety.[36]
- The Federal Trade Commission (FTC) has used its consumer protection law authority to sanction unfair or deceptive AI practices, particularly in data privacy and bias.[37]
In 2019, the Algorithmic Accountability Act was proposed to require impact assessments for high-risk automated systems, covering accuracy, fairness, bias, and privacy.[38] While it did not become law, an updated version was reintroduced in 2022 and remains under legislative review. The decentralized American model facilitates innovation but has been criticized for lacking uniformity, particularly on algorithmic discrimination and transparency standards. It nonetheless shows how valuable agency-level expertise is in adapting AI regulation to domain-specific risks, a lesson India’s intricate administrative framework can draw upon.
C. CHINA
China has been following a top-down, state-led AI regulatory approach, blending AI into national development plans while exercising tight control. The New Generation Artificial Intelligence Development Plan (2017) maps a broad outline to transform China into a global AI leader by 2030.[39]
China’s most important regulatory action is the Regulations on the Administration of Algorithmic Recommendations (2022), issued by the Cyberspace Administration of China (CAC).[40] The regulations mandate: Firstly, Algorithmic Transparency: platforms must disclose the logic and effect of recommendation systems to regulators and users. Secondly, User Rights: users must have the option to opt out of personalized content or algorithmic sorting. Thirdly, Fairness and Non-Discrimination: algorithms must not be used to induce addiction, propagate misinformation, or impose discriminatory practices.[41]
In addition, China requires security audits of algorithms applied in sensitive sectors such as public opinion management, education, and financial services. The state-led approach guarantees human supervision but also entails censorship, surveillance, and the absence of judicial appeal. For India, the Chinese model may offer insight into centralized coordination and real-time enforcement, although it would need considerable re-alignment to fit India’s democratic and constitutional ethos.
VI. POLICY RECOMMENDATIONS
India stands at a regulatory crossroads. As AI rapidly penetrates sensitive domains ranging from judicial administration to welfare delivery, the legal vacuum around its accountability grows increasingly untenable. The pending Digital India Act offers a rare and timely window to construct an AI governance architecture anchored in constitutional values, technological foresight, and international comparative models.[42] This paper suggests a five-point roadmap designed for the Indian context.
1. RISK-BASED CLASSIFICATION OF AI
Implementing a tiered regulatory approach, akin to the European Union’s Artificial Intelligence Act, would enable regulation proportionate to the harms an AI system can cause.[43] The proposed classification is as follows:
- Low-risk AI: Comprises tools like spellcheckers, customer support chatbots, or recommender systems. Such systems ought to be subject to minimal regulation beyond general data protection and transparency requirements.
- Medium-risk AI: Encompasses algorithms applied in hiring, educational assessment, or financial risk analysis. These should be audited periodically, adhere to explainability requirements, and include user disclosures to prevent indirect harm.
- High-risk AI: Includes systems applied in predictive policing, health diagnosis, credit assessment, face recognition, and distribution of public benefits. These need to undergo rigorous pre-deployment conformity audits, human oversight requirements, and legal responsibility frameworks.
Such functional differentiation keeps regulation technology-neutral yet context-sensitive, preventing both overregulation and under-protection. A schematic sketch of how such a tiered checklist might operate follows.
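As a sketch of how the proposal above might be operationalized, the Python fragment below lets a regulator define the controls each tier requires and blocks deployment until every control is in place. The control names and tier labels are hypothetical renderings of this paper’s proposal, not an existing regulatory schema.

```python
# Controls required at each proposed tier; names are illustrative only.
RISK_CONTROLS = {
    "low": {"data_protection", "basic_transparency"},
    "medium": {"data_protection", "basic_transparency",
               "periodic_audit", "explainability", "user_disclosure"},
    "high": {"data_protection", "basic_transparency",
             "periodic_audit", "explainability", "user_disclosure",
             "pre_deployment_conformity_audit", "human_oversight",
             "liability_insurance"},
}

def deployment_gaps(tier, controls_in_place):
    """Return the controls still missing before a system in `tier` may deploy."""
    return RISK_CONTROLS[tier] - set(controls_in_place)

# A hypothetical high-risk credit-scoring model with only two controls done:
print(deployment_gaps("high", {"data_protection", "periodic_audit"}))
```

The design choice is deliberate: obligations attach to the tier rather than to the technology, so the same model faces lighter duties in a low-stakes deployment and the full checklist in a high-stakes one.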
2. CONDITIONAL LEGAL PERSONHOOD
Only a partial legal status should be given to those AI systems that show considerable autonomy and work in high-stakes contexts with little human intervention. Such a status would not imply full rights but provide for responsible assignment of obligations and liabilities.[44]
This framework would make possible the following: Firstly, Tort and Contractual Liability: AI systems could be held separately responsible in civil cases where injury results from their autonomous functioning. Secondly, Vicarious Liability of Developers and Deployers: human and corporate entities involved in design, training data, and deployment may be held liable where foreseeability and causation can reasonably be made out. Thirdly, AI Compensation Schemes and Insurance: just as the Motor Vehicles Act, 1988 requires no-fault insurance, developers of risky AI should contribute to liability schemes that compensate victims of algorithmic harm.
This model takes a middle path, filling the accountability gap without attributing human-like qualities to machines or conflating legal subjecthood with moral personhood.
3. MANDATORY HUMAN OVERSIGHT
No AI system should be permitted to take final or irreversible decisions in areas impacting core rights, including criminal justice, taxation, or surveillance. This rule finds its foundation in constitutional due process and the right to dignity under Article 21, as expounded in Justice K.S. Puttaswamy v. Union of India.[45]
Examples of required human oversight mechanisms include:
- Reviewable Algorithms: Legal provisions making AI decisions mandatorily subject to human appeal, audit, or override.
- Human-in-the-Loop (HITL) Protocols: Embedding a competent human actor in key decision-making processes.
Integrating human agency into AI deployment guarantees that technology-driven efficiency does not supplant constitutional accountability. A minimal sketch of such a gate follows.
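The following Python sketch assumes a policy under which any decision touching core rights, or any low-confidence model output, is routed to a human reviewer whose verdict is final and logged for appeal. All thresholds, field names, and the reviewer interface are illustrative assumptions, not a prescribed design.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    subject_id: str
    model_output: str          # e.g., "deny_benefit"
    confidence: float          # model's self-reported confidence in [0, 1]
    affects_core_rights: bool  # e.g., welfare, policing, taxation

def hitl_gate(decision: Decision,
              human_review: Callable[[Decision], str],
              confidence_floor: float = 0.95) -> str:
    """Route to a human reviewer whenever core rights are at stake or
    confidence is low; log the outcome so it can be audited and appealed."""
    if decision.affects_core_rights or decision.confidence < confidence_floor:
        verdict = human_review(decision)
        print(f"[audit] {decision.subject_id}: human said '{verdict}', "
              f"model said '{decision.model_output}'")
        return verdict
    return decision.model_output  # low-stakes decisions may remain automated

# A hypothetical reviewer overturning a low-confidence welfare denial:
reviewer = lambda d: "grant_benefit"
print(hitl_gate(Decision("W-102", "deny_benefit", 0.71, True), reviewer))
```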
4. ESTABLISHMENT OF AN AI LIABILITY TRIBUNAL
India should create a specialized fast-track tribunal, similar to the National Company Law Tribunal (NCLT), to hear AI-related disputes, harms, and compliance cases. Its key features would include:
- Hybrid Panels: The tribunal should comprise legal experts, computer science professionals, ethicists, and industry specialists to facilitate multi-disciplinary decision-making.
- Expedited Redress: Victims of algorithmic harm should not be channelled through traditional civil courts, which may lack the technical competence or speed required.
- Rulemaking and Guidelines: The tribunal could also promulgate industry-specific norms, much as SEBI and TRAI frame subordinate legislation in the financial and telecom sectors.
This would increase regulatory clarity, foster judicial specialization, and build public trust in AI regulation.
5. TORT LAW AMENDMENTS
The Indian law of torts, though predominantly uncodified, must adapt to the non-anthropocentric causality of AI. Traditional categories such as negligence or strict liability do not suffice for autonomous systems whose behaviour is neither fully intentional nor fully predictable. The following changes are suggested:
- Algorithmic Negligence: Recognizing failure to audit, train, or supervise AI as actionable negligence.
- Predictive Liability: Holding developers liable where they fail to prevent reasonably foreseeable harms arising from machine learning outputs or data biases.[46]
- No-Fault AI Insurance: Compulsory insurance programs paid by AI developers, like those under the Employees’ Compensation Act, 1923, would pay compensation to victims without establishing fault.
These reforms would close the current gap of causal opacity, enabling courts to respond to AI-caused harm without stretching legal doctrine beyond its intended boundaries. India’s regulatory response must balance innovation with responsibility, efficiency with ethics, and technological advancement with constitutional values. The chance to regulate AI is not simply a legal duty but a democratic obligation. Drawing on comparative models while remaining grounded in indigenous jurisprudence, India can and should develop a future-proof legal framework for AI regulation.
VII. CONCLUSION
Artificial Intelligence has transformed from a passive instrument into an active force of socio-legal consequence. As AI systems become more autonomous, making decisions that determine access to justice, healthcare, finance, and liberty, the Indian legal system can no longer rest on anthropocentric premises of responsibility and rights. The law must answer not simply by controlling new technologies but by reimagining the elementary principles of liability, personhood, and justice. This is not a matter of giving rights to machines, but of protecting human rights in the face of increasing automation. A limited legal fiction of electronic personhood can bridge the accountability gap, simplify liability, and secure the rule of law without attributing anthropomorphic qualities to AI. But this change must be steered by constitutional safeguards: the right to privacy, the guarantee of equality, the imperative of human dignity. As the Supreme Court reiterated in Justice K.S. Puttaswamy, advances in technology should not compromise civil liberties. In that vein, India should mandate human-in-the-loop audits, embed anti-discrimination protections, and empower institutions such as a dedicated AI tribunal to arbitrate emerging disputes.
The law must also adopt a forward-looking approach, revising tort structures and legislative interpretation to address causality without intent and damage without immediate fault. AI will keep evolving, but legal accountability cannot lag behind. Finally, the legal system must ensure that the emergence of machine autonomy does not undermine human responsibility. At the threshold of a digital constitutional moment, India must declare that the smartest system is not the one that computes best but the one that safeguards dignity, justice, and the rights of all.
[1] Lawrence Lessig, Code and Other Laws of Cyberspace 6 (1999).
[2] R. Srikanth, Artificial Intelligence in Indian Banking Sector: Challenges and Opportunities, 12 Int’l J. Comput. Sci. & Mgmt. Stud. 1, 2–3 (2020).
[3] Shruti Chaturvedi, Can AI Assist Judges in India?, The Print (Dec. 6, 2023).
[4] European Parliament, Civil Law Rules on Robotics, 2015/2103(INL), at ¶59 (2017).
[5] The Information Technology Act, No. 21 of 2000, § 43A, India Code (2000).
[6] Id.
[7] Indian Contract Act, No. 9 of 1872, § 10, India Code (1872).
[8] Indian Penal Code, No. 45 of 1860, §§ 84, 299, India Code (1860).
[9] Vidhi Centre for Legal Policy, AI and Discriminatory Lending in India: Legal Challenges and Gaps (2023), https://vidhilegalpolicy.in/research/ai-discriminatory-lending/.
[10] Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1 (India).
[11] Id. at ¶180–81.
[12] European Parliament, Civil Law Rules on Robotics, 2015/2103(INL), at ¶59 (2017).
[13] Tata Eng’g & Locomotive Co. Ltd. v. State of Bihar, AIR 1965 SC 40 (India).
[14] Yogendra Nath Naskar v. C.I.T., AIR 1969 SC 1089 (India).
[15] Mohd. Salim v. State of Uttarakhand, AIR 2017 Utt 4 (India).
[16] Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act), COM (2021) 206 final.
[17] Ugo Pagallo, The Laws of Robots: Crimes, Contracts, and Torts 65–89 (Springer 2013).
[18] Thomas Burri, Robots and Liability: Existing and Emerging Regimes in Europe, in Research Handbook on the Law of Artificial Intelligence 153–72 (Woodrow Barfield & Ugo Pagallo eds., 2018).
[19] Supra note 12.
[20] Navtej Singh Johar v. Union of India, (2018) 10 SCC 1 (India).
[21] Usha Ramanathan, Demographic Exceptionalism and Digital Identification in India, 2 Indian L. Rev. 1, 12–17 (2018).
[22] Martha Nussbaum, Frontiers of Justice: Disability, Nationality, Species Membership 159–60 (2006).
[23] Constitution of India arts. 14, 21; Maneka Gandhi v. Union of India, (1978) 1 SCC 248 (India).
[24] Kate Crawford, Atlas of AI 122–43 (2021).
[25] Rashida Richardson et al., Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice, 94 N.Y.U. L. Rev. Online 15 (2019).
[26] Joy Buolamwini & Timnit Gebru, Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, in Proc. Conf. Fairness, Accountability and Transparency 77–91 (2018).
[27] Digital Personal Data Protection Act, No. 22 of 2023, § 5, India Code (2023).
[28] Supra note 10.
[29] Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act), COM (2021) 206 final.
[30] Id. art. 5.
[31] Id. arts. 6–29.
[32] Id. art. 52.
[33] Id. recitals 69–71.
[34] European Commission, FAQs on the EU Artificial Intelligence Act, https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai.
[35] U.S. Food & Drug Admin., Artificial Intelligence and Machine Learning in Software as a Medical Device (2021), https://www.fda.gov/media/145022/download.
[36] U.S. Dep’t of Transp., Automated Driving Systems 2.0: A Vision for Safety (2017), https://www.nhtsa.gov/technology-innovation/automated-vehicles-safety.
[37] Federal Trade Commission, Business Blog: Aiming for Truth, Fairness, and Equity in Your Company’s Use of AI (Apr. 19, 2021), https://www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai.
[38] Algorithmic Accountability Act of 2019, H.R. 2231, 116th Cong. (2019); reintroduced as Algorithmic Accountability Act of 2022, H.R. 6580, 117th Cong. (2022).
[39] State Council of China, New Generation Artificial Intelligence Development Plan (July 20, 2017).
[40] Cyberspace Admin. of China, Regulations on the Administration of Algorithmic Recommendations (2022), http://www.cac.gov.cn/2022-01/04/c_1642894606461270.htm.
[41] Id. arts. 7–12.
[42] Ministry of Electronics and Information Technology, Digital India Act Consultation, https://www.meity.gov.in/digital-india-act (last visited July 15, 2025).
[43] Supra note 29.
[44] European Parliament, Civil Law Rules on Robotics, 2015/2103(INL), at ¶59–63 (2017).
[45] Supra note 10.
[46] Supra note 17.