Author(s): Adv. R. Christson
Paper Details: Volume 3, Issue 6
Citation: IJLSSS 3(6) 18
Page No: 165 – 177
ABSTRACT
Artificial intelligence (AI) is among the most disruptive technologies of our time, transforming sectors including healthcare, finance, governance, defence, and the justice system. Its speed and autonomy complicate the question of legal responsibility, which has traditionally presumed that human beings control information and take the relevant actions. AI raises weighty legal and ethical dilemmas, including responsibility for autonomous systems, machine discrimination, software copyright, and the use of AI in war and medicine. Taking a comparative methodological approach, the paper examines global regulatory frameworks such as the European Union AI Act, the United States AI Bill of Rights, Chinese AI regulations, and the OECD AI Principles. It proposes a hybrid liability structure, stresses the central importance of transparency and explainability, and underlines the necessity of strengthening regulatory and judicial capacity. Ultimately, to ensure accountability in the age of AI, a careful balance must be struck between protecting the vulnerable, serving the interests of the wider economy, and advancing technology.
Keywords: Artificial Intelligence, Legal Responsibility, Liability, AI Regulation, Hybrid Liability, Algorithmic Bias, Jurisprudence
INTRODUCTION
Artificial intelligence (AI) has become a significant agent of change across numerous areas within a very short time. Once merely an instrument of the tech-savvy, AI has now penetrated nearly every aspect of human activity, from healthcare, finance, governance, and defence to the justice system. Its ability to perform functions that previously required human intelligence, such as logical reasoning, prediction, and learning, has led many to anticipate gains in efficacy and innovation. There is, however, another side of the coin: the very autonomy that makes AI powerful also makes it difficult to determine who is responsible when wrongs are done and AI systems produce defective results. Traditionally, the law did not treat non-human entities or objects as legal persons; human conduct was central to causes of action, and criminal liability rested on the principles of mens rea (the guilty mind) and actus reus (the wrongful act). Autonomous systems problematize this concept of legal liability to such a degree that the responsible party is not always apparent. Is it the programmers who designed the system, the companies that sold and deployed it, the users, or the AI itself? These problems suggest that legal responsibility urgently needs to be reconsidered as the technology continues its rapid development. Recent events bear witness to the global importance of resolving these difficulties. In June 2024, the European Union enacted the AI Act, the first comprehensive legal framework regulating AI, which adopts a risk-based approach as a means of enforcing responsibility. The United States relies on sector-specific regulation and the AI Bill of Rights of 2022, intended to safeguard individuals against algorithmic harms. China has imposed close oversight of generative AI platforms, while India continues to address AI-related problems through data protection and other cyber laws in lieu of a standalone AI statute. High-profile cases revealing the shortcomings of the current legal system in addressing AI-related issues include the fatal 2018 Uber self-driving car crash in Arizona and the ongoing copyright lawsuits involving generative AI. This paper explores the issues AI raises for legal responsibility. It examines global policies through a comparative approach, identifies where modern jurisprudence falls short, and proposes how equilibrium between innovation and responsibility may be attained in the AI era.
UNDERSTANDING ARTIFICIAL INTELLIGENCE IN LEGAL CONTEXT:
Artificial Intelligence (AI) refers to the capability of machines to replicate human thought, acquire knowledge, and evolve over time so as to make independent decisions. Beyond its technological impact, AI creates challenging legal concerns. Because such systems can act on their own initiative, it is often unclear who is responsible or accountable for their operations.[1]
DEFINING ARTIFICIAL INTELLIGENCE:
Artificial Intelligence (AI) describes the capability of machines to imitate how human beings think, acquire knowledge, and make decisions automatically. While the field's potential for innovation should not be understated, AI also raises difficult legal and ethical issues. Because such systems can operate with some degree of autonomy, they strain conventional concepts of responsibility and accountability.
Definitions of AI vary, but it is most broadly divided into three categories. Narrow or weak AI is designed for a single purpose: it may operate a chatbot, run a facial recognition application, or recommend films and other products through recommendation systems. Strong AI (sometimes referred to as general AI), by contrast, remains largely hypothetical; it denotes a machine of broad intelligence capable of reasoning and solving problems across diverse fields much as human beings do. The third category, autonomous AI systems, comprises systems that make decisions and act independently of human control, including technologies such as military drones and self-driving cars.
This classification is decisive in the legal realm. The central question is whether artificial intelligence should be treated as a mere instrument that people control, or as an independent entity that warrants its own form of responsibility and must be addressed by the law in its own right.[2]
ROLE OF AI IN DIFFERENT SECTORS:
The use of AI across industries demonstrates both its promise and the legal issues it raises. In medicine, diagnostic systems can make treatment faster and more accurate, but they create questions of responsibility when mistakes are made. In finance, algorithmic trading can enhance market efficiency, but it may also cause massive market disruption. Predictive systems such as COMPAS have been applied in the United States justice system to assess the likelihood of reoffending.[3] These instruments, however, have been heavily criticised for embodying racial inequality. In transportation, autonomous vehicles may reduce accidents caused by human error, yet they make it harder to identify who is responsible in the event of a crash. In addition, the proliferation of AI in government, such as facial recognition, has raised serious concerns about privacy and fundamental rights.
RELEVANCE OF AI IN JURISPRUDENCE:
AI challenges customary norms of law because it presents circumstances in which responsibility is exceedingly difficult to establish. Can a machine that lacks awareness and intent be held liable? And when AI causes harm, should liability fall on the developers, the organisations deploying the system, or the ultimate consumers? The answers to these questions matter for creating rules that can support AI's development while protecting society. Treating AI as a matter of legal responsibility helps expose the gaps in existing policies and points to the changes needed in forming new regulations and standards.
CONCEPT OF LEGAL RESPONSIBILITY IN JURISPRUDENCE:
Jurisprudence is founded on law, and its role is to hold individuals and institutions accountable for their acts and omissions. Traditionally, responsibility is assigned through civil, criminal, and tort law. With AI systems able to operate independently, these frameworks are tested, and new concerns about responsibility arise.
TRADITIONAL DOCTRINES OF RESPONSIBILITY:
Civil liability guarantees compensation where harm results from a wrongful act. In criminal law, the unlawfulness of an act (actus reus) combined with the intention to commit it (mens rea) establishes criminal liability. For what are known as high-risk activities, strict liability holds the person in charge responsible without the need to establish fault. These doctrines revolve around human conduct and will, which do not map neatly onto the behaviour of autonomous systems.[4]
CHALLENGES OF APPLYING TRADITIONAL CONCEPTS TO AI:
Artificial intelligence systems can cause harm without any human prompting. This raises provocative questions for the criminal law, which has traditionally depended on mens rea and actus reus as its governing principles. Where no human intended or performed the harmful act committed by an AI system, who is to be held responsible? There is no clear answer, and existing legal principles offer little help.[5]
EXISTING APPROACHES TO AI LIABILITY:
Some researchers argue that AI systems should be subject to product liability, so that developers and manufacturers answer for defective AI products. Others suggest vicarious liability, under which the operators or employers of AI systems bear responsibility.[6] An increasingly common view is that the more sensible method is to distribute responsibility among developers, deployers, and end users, each answering for their role according to who actually caused the harm in a given scenario.[7]
JURISPRUDENTIAL DILEMMAS:
AI also raises deeper philosophical concerns. Could an autonomous system ever be recognised as a legal person, as corporations are? Granting AI legal personhood is a serious concern, because it may dilute the accountability of the humans who design and use it. A further difficulty lies in the law's inability to address harms inflicted by AI when human negligence cannot be adequately identified. These questions show how hard it is to cling to traditional legal principles, and how the law must transform to keep pace with new technology.
CURRENT GLOBAL DEVELOPMENTS IN AI REGULATION
The increased prevalence of Artificial Intelligence (AI) has led governments worldwide to establish legislative and regulatory frameworks to keep AI accountable, safe, and ethical. Their methods differ, but together these models illustrate the range of approaches to managing AI.
EUROPEAN UNION: THE AI ACT (2024)
In 2024, the European Union adopted the AI Act, currently the world's most comprehensive framework for regulating artificial intelligence. The Act assesses the level of risk posed by a particular system rather than treating all AI in the same way. Certain applications, such as social scoring, are deemed unacceptable and prohibited altogether. AI used in sensitive contexts such as healthcare, policing, or hiring is classified as high-risk; in such cases, the law imposes stringent requirements to ensure these systems are transparent, accountable, and continuously monitored by humans.[8] Low- or minimal-risk AI faces fewer limitations. In this way, the EU attempts to strike a balance: on the one hand, people's fundamental rights must be preserved; on the other, innovation must be promoted.
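To make the Act's tiered logic concrete, the following minimal Python sketch shows how a compliance team might encode the four risk tiers described above. The tier names follow the Act's structure, but the example use cases, the mapping, and every identifier below are illustrative assumptions rather than anything drawn from the Act's annexes.

```python
# A minimal sketch of the AI Act's risk-based logic.
# Tier names follow the Act; the use-case mapping is hypothetical.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict transparency, accountability, human oversight"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unregulated"

# Hypothetical mapping from use case to tier, for illustration only.
USE_CASE_TIER = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "medical_diagnosis": RiskTier.HIGH,
    "hiring_screen": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Return the regulatory consequence for a given use case."""
    tier = USE_CASE_TIER.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} risk -> {tier.value}"

for case in USE_CASE_TIER:
    print(obligations(case))
```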
UNITED STATES: SECTORAL REGULATION AND THE AI BILL OF RIGHTS (2022)
The United States still lacks a comprehensive AI law. Regulation is instead mostly sector-specific (covering areas such as healthcare, finance, and self-driving vehicles), and these regulations are quite diverse. In 2022, the White House released the AI Bill of Rights, which outlines protections for individuals against the harms of AI: the right to safe and effective systems, safeguards against algorithmic discrimination, data protection, disclosure of the use of AI, and access to human alternatives where needed. Courts are also shaping the landscape through ongoing copyright and intellectual property disputes involving generative AI.[9]
UNITED KINGDOM: PRO-INNOVATION APPROACH
The United Kingdom has chosen a more liberal, innovation-focused approach to regulation. Instead of enacting a single AI law, the UK relies on existing regulators and asks them to provide industry-specific guidance. This practice aims to balance technological advancement against the preservation of personal rights.
CHINA: GENERATIVE AI REGULATIONS (2023)
China's approach to AI regulation is strict and state-centric. Regulations introduced in 2023 require generative AI platforms to align with national values, control content, and take responsibility for adverse consequences. This framework prioritises state control, risk management, and social stability, often at the expense of private developers' freedom to operate.
INDIA: RELIANCE ON EXISTING LAWS
India has not yet passed AI-specific legislation. The applicable law rests on the Information Technology Act, 2000 and the Digital Personal Data Protection Act, 2023. Plans under the Digital India framework indicate that the government intends to develop AI governance standards, but the work remains at the consultation phase. India's strategy is to stimulate innovation while attending to possible dangers such as bias, misinformation, and privacy infringement.[10]
EMERGING LEGAL CHALLENGES IN AI AND RESPONSIBILITY
Although the application of Artificial Intelligence brings numerous advantages across industries, its use has raised numerous legal questions. These challenges expose the gaps in traditional liability doctrine and highlight the need for new laws and policies. Several contemporary examples illustrate these problems.[11]
AUTONOMOUS VEHICLES AND ACCIDENT LIABILITY
Self-driving cars present one of the most urgent questions of legal responsibility. It is difficult to identify who is liable when an autonomous vehicle causes an accident: the manufacturer, the software developer, the owner of the vehicle, or the AI system itself?[12] A prominent instance is the 2018 crash of an Uber self-driving car in Arizona, which caused the first pedestrian death involving an autonomous vehicle.[13] Investigations found that the AI system failed to identify the pedestrian in time. The case also produced a dispute over whether Uber, the AI's creators, or the safety driver monitoring the car should bear liability. Conventional road and tort law, which presumes human drivers in control, offered little help in resolving the dispute.[14]
GENERATIVE AI AND COPYRIGHT INFRINGEMENT
Generative AI systems for text, images, and music have triggered legal battles over intellectual property. In 2023, The New York Times sued OpenAI and Microsoft on the ground that their AI models were trained on copyrighted content without a licence.[15] Similar cases continue to be brought by authors, artists, and software developers, raising the questions of whether AI training constitutes fair use and whether works produced by AI can themselves be copyrighted.[16]
ALGORITHMIC BIAS AND DISCRIMINATION
AI systems can reinforce or amplify biases present in the data on which they are trained, producing discriminatory results. The COMPAS algorithm, used in the United States to predict criminal recidivism, was found to discriminate against African American defendants by disproportionately assigning them high-risk classifications.[17] Such findings underscore the difficulty of establishing legal accountability for algorithmic bias under existing anti-discrimination law.[18]
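To see why such bias is legally contentious, the following minimal Python sketch illustrates the kind of disparity audit that underpinned the ProPublica analysis of COMPAS: comparing false-positive rates across demographic groups. The records here are entirely hypothetical and are not drawn from the COMPAS dataset; the sketch only demonstrates how the disparity is measured.

```python
# A minimal fairness-audit sketch with hypothetical data. It compares
# false-positive rates (people flagged "high risk" who did not
# reoffend) across two groups, mirroring the structure of the
# ProPublica COMPAS analysis.

records = [
    # (group, predicted_high_risk, actually_reoffended) -- hypothetical
    ("A", True,  False), ("A", True,  True),  ("A", False, False),
    ("A", True,  False), ("B", False, False), ("B", True,  True),
    ("B", False, False), ("B", False, True),
]

def false_positive_rate(group: str) -> float:
    """Share of non-reoffenders in `group` wrongly flagged high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders) if non_reoffenders else 0.0

for g in ("A", "B"):
    print(f"Group {g}: false-positive rate = {false_positive_rate(g):.0%}")
```

On this toy data the audit reports a 67% false-positive rate for group A against 0% for group B: precisely the kind of disparity that anti-discrimination law struggles to attribute to any single human decision.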
JURISPRUDENTIAL QUESTIONS OF AI RESPONSIBILITY
The emergence of Artificial Intelligence (AI) has brought the most basic questions of jurisprudence to the surface. Unlike conventional technologies, AI systems possess independence, learning capacity, and decision-making abilities that challenge traditional conceptualisations of legal responsibility. This part examines the main theoretical and doctrinal issues raised by AI.
CAN AI BE CONSIDERED A LEGAL PERSON?
The law already recognises non-human entities, such as corporations, as legal persons possessing rights and obligations. Some researchers believe this status might be extended to AI systems, which could then be held responsible, enter into contracts, or even bear liability.[19] Opponents counter that AI systems lack consciousness, intention, and moral agency, and that granting them personhood might demean human responsibility. The debate is speculative but vital to any prospective legal reform.[20]
STRICT LIABILITY AND VICARIOUS LIABILITY:
Strict liability holds persons or parties responsible for injury regardless of fault, while vicarious liability places responsibility on those in charge of, or who employ, others. Applying these principles to AI implies that developers, manufacturers, or operators of autonomous systems answer for harms. But as AI becomes more autonomous, it becomes harder to say who exercised enough control to justify liability.[21]
HYBRID MODELS OF RESPONSIBILITY:
Given the constraints of traditional doctrines, hybrid models of liability have been put forward. Under these models, responsibility is shared among stakeholders: developers answer for design weaknesses and software bugs, deployers for careless monitoring or misuse, and users for inappropriate reliance on AI outputs.[22] This strategy aims to balance accountability against the autonomous operation of AI.
COMPARATIVE PERSPECTIVE: LESSONS FROM GLOBAL FRAMEWORKS
Countries around the world have taken varied approaches to regulating Artificial Intelligence (AI), owing to their different legal traditions, societal priorities, and stages of technological advancement. Comparative analysis is useful for identifying best practices and lessons that may inform future regulatory frameworks.[23]
EUROPEAN UNION: GDPR AND THE AI ACT
The European Union has led the way in the regulation of AI. Article 22 of the General Data Protection Regulation (GDPR) protects individuals against fully automated decisions that have substantial effects on them, granting a right to human intervention and to challenge such decisions.[24] Building on this foundation, the AI Act of 2024 introduces a risk-based regulatory system. High-risk applications of AI, such as those in healthcare, law enforcement, and the job market, are subject to strict criteria of transparency, human oversight, and accountability.[25] Lower-risk applications face lighter regulation. The EU model is characterised by a rights-centred approach that balances innovation against the protection of fundamental rights.[26]
UNITED STATES: SECTORAL REGULATION AND THE AI BILL OF RIGHTS
The United States has not yet passed a comprehensive AI statute, relying instead on sector-specific regulation. Special frameworks apply to spheres such as healthcare, finance, and self-driving cars. The White House's 2022 publication of the AI Bill of Rights articulates protections for individuals, including safe and effective AI systems, safeguards against algorithmic discrimination, data privacy, information about AI use, and access to human alternatives.[27] This strategy promotes innovation while trying to mitigate possible harms, but inconsistent application across sectors may leave the law unclear.[28]
UNITED KINGDOM: PRO-INNOVATION FRAMEWORK
The United Kingdom has pursued a lax, pro-innovation policy. Rather than developing a single AI statute, the UK relies on industry-specific guidance and regulatory discretion. This plan enables technological growth while offering avenues through which individual rights may be protected in high-risk applications.[29]
CHINA: STRICT STATE-CENTRIC OVERSIGHT
China has adopted strict AI regulation, especially for generative AI systems. Its policies centre on content monitoring, national security, and compliance with societal values.[30] Although this method gives top priority to risk management and social stability, it can suppress the freedom of individual developers and inhibit novelty.[31]
INDIA: LESSONS FROM GLOBAL PRACTICES
India currently governs AI through pre-existing statutes, namely the Information Technology Act, 2000 and the Digital Personal Data Protection Act, 2023.[32] International practice offers useful lessons for India, such as introducing risk-based regulation for high-impact AI systems, mandating transparency and explainability, developing clear liability models for developers, deployers, and users, and building judicial and regulatory capacity to deal with AI-related disputes.[33]
THE WAY FORWARD: ADDRESSING AI AND RESPONSIBILITY
The rapid evolution and deployment of Artificial Intelligence (AI) demand both responsive and proactive approaches to legal regulation. To ensure accountability, defend fundamental rights, and foster technology, policymakers, regulators, and the judiciary need a holistic approach to the risks posed by AI.
ADOPTING A HYBRID MODEL OF LIABILITY
A middle-way approach to liability can strike a balance between accountability and creativity. Responsibility should be distributed among developers, deployers, and users according to their positions in the lifecycle of artificial intelligence. In situations involving high-risk AI, strict liability may be applied, while vicarious liability may attach to persons who control or supervise AI systems. Specialised industry standards are needed in sectors such as healthcare, finance, and transportation, as the sketch below illustrates.[34]
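As a thought experiment, the hybrid allocation described above can be expressed as a simple decision procedure. The sketch below is a hypothetical encoding: the cause categories, the routing table, and the assignment of strict liability to the deployer of a high-risk system are all assumptions for illustration, not statements of any enacted rule.

```python
# A minimal sketch of a hybrid liability rule as a decision procedure.
# Cause categories and routing are hypothetical, for illustration only.

LIABILITY_ROUTING = {
    "design_defect": "developer",        # flaw in the model or software
    "negligent_monitoring": "deployer",  # careless oversight or misuse
    "improper_reliance": "user",         # unreasonable trust in outputs
}

def allocate_liability(cause: str, high_risk: bool) -> str:
    """Route a harm event to a liable party under the hybrid model."""
    # Assumption: strict liability for high-risk systems falls on the
    # deployer regardless of fault, echoing the prose above.
    if high_risk:
        return "deployer (strict liability)"
    return LIABILITY_ROUTING.get(cause, "court determination required")

print(allocate_liability("design_defect", high_risk=False))    # developer
print(allocate_liability("improper_reliance", high_risk=True)) # deployer
```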
ENSURING TRANSPARENCY AND EXPLAINABILITY
Numerous AI systems are black boxes that produce output without clear, traceable reasoning. The law must be able to determine how a decision was reached. Mandatory transparency and explainability requirements for high-risk AI systems would enable independent audits of algorithms and make it easier for courts to assign liability.[35]
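The following is a minimal sketch of what "explainability by design" could look like in practice, assuming a high-risk system that uses an interpretable linear score: each decision is logged together with the per-feature contributions that produced it, so that an auditor or court can later trace the reasoning. All weights, feature names, and thresholds here are hypothetical.

```python
# A minimal "explainability by design" sketch: an interpretable linear
# score whose per-feature contributions are logged with every decision,
# so the reasoning behind each output is preserved for audit.

from datetime import datetime, timezone

WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}  # hypothetical
THRESHOLD = 0.5
audit_log: list[dict] = []  # retained for independent review

def decide_and_log(applicant: dict) -> dict:
    """Score an applicant and record a traceable decision record."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": applicant,
        "contributions": contributions,  # why the score is what it is
        "score": round(score, 3),
        "decision": "approve" if score >= THRESHOLD else "refer_to_human",
    }
    audit_log.append(record)
    return record

print(decide_and_log({"income": 1.2, "debt_ratio": 0.3, "years_employed": 2.0}))
```

The design choice the sketch embodies is that the explanation is generated at decision time rather than reconstructed afterwards, which is what makes independent audits and judicial fact-finding tractable.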
STRENGTHENING REGULATORY AND JUDICIAL CAPACITY
Regulatory bodies specifically dedicated to AI, as in the European Union and China, can concentrate on compliance, risk evaluation, and enforcement. Judicial education is also required to equip judges and other legal experts with the technical skills needed to handle AI-related lawsuits.[36] Policies and regulatory systems should be precise yet flexible enough not to stifle innovation.[37]
PROMOTING INTERNATIONAL COOPERATION
AI use is global by nature, involving cross-border exchanges and international deployments. Harmonised international principles, including the OECD AI Principles and compatibility with the EU AI Act, can help secure consistency, fairness, and competitiveness.[38] International cooperation is also necessary to combat issues such as algorithmic bias, data privacy, and liability for cross-border AI offences.[39]
CONCLUSION
Artificial Intelligence (AI) is one of the most ground-breaking technologies of the twenty-first century, bringing unprecedented potential to healthcare, finance, transport, governance, and law. At the same time, AI undermines established ideas about legal liability and responsibility. The doctrines now in place, grounded in human will and conduct, are poorly positioned to handle harms caused by autonomous or semi-autonomous AI systems. This paper has examined the new frontiers of AI from a jurisprudential perspective, through cases including the Uber self-driving car crash in Arizona, the current debates around AI and copyright, predictive bias in the justice system, and the application of autonomous AI in the military. Global regulatory frameworks, such as the European Union's AI Act, the United States' AI Bill of Rights, China's generative AI rules, and the OECD AI Principles, reveal by comparison the variety of practices adopted to confront these obstacles. A new approach to liability, mandating algorithmic openness, explainability, and ethical compliance, is necessary to hold developers, deployers, and consumers to account. Increased judicial and regulatory capacity, cooperation among countries, and greater public awareness are also important. Finally, legal responsibility in the era of AI requires restraint and a balance between individual rights and social benefit, while permitting technological innovation to exist and thrive. Without clear legal frameworks and proactive regulation, the transformative potential of AI risks being engulfed by legal ambiguity, harm, and moral dilemmas.
[1] Stuart Russell & Peter Norvig, Artificial Intelligence: A Modern Approach 1–2 (3rd ed. 2016); Ryan Calo, “Robotics and the Lessons of Cyberlaw,” 103 California Law Review 513, 514 (2015).
[2] John McCarthy et al., A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence (1955) (unpublished manuscript); Elaine Rich & Kevin Knight, Artificial Intelligence 4–5 (3rd ed. 1991).
[3] Jon Kleinberg et al., “Discrimination in the Age of Algorithms,” 133 Journal of Economic Perspectives 121, 124 (2019).
[4] W. Prosser & W. Keeton, Prosser and Keeton on Torts §30 (5th ed. 1984); 4 W. Blackstone, Commentaries on the Laws of England 20 (1769).
[5] Kimberly Krawiec, The Body of the Law: Regulation by Artificial Intelligence (forthcoming, on file with author).
[6] Matthew Henry, “Product Liability for AI Systems,” 72 Texas Law Review 801, 812 (2020).
[7] Andrea Renda et al., Liability and Artificial Intelligence: A European Perspective 33–35 (2021).
[8] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024, O.J. (L) (AI Act); European Commission, AI Act Enters into Force 1 August 2024 (Aug. 1, 2024).
[9] White House, Blueprint for an AI Bill of Rights (2022), available at: https://www.whitehouse.gov/ostp/ai-bill-of-rights/ (last visited Aug. 27, 2025); Benjamin Larsen, “What’s in the US ‘AI Bill of Rights’ — and What Isn’t,” World Economic Forum (Oct. 14, 2022).
[10] Amlan Mohanty & Shatakratu Sahu, India’s Advance on AI Regulation, Carnegie Endowment (Nov. 21, 2024).
[11] Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information 19–20 (2015).
[12] Bryant Walker Smith, “Automated Driving and Product Liability,” 2017 Michigan State Law Review, 1, 3 (2017).
[13] Niraj Chokshi, Self-Driving Uber Car Kills Pedestrian in Arizona, Officials Say, New York Times (Mar. 19, 2018), available at: https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html (last visited Oct. 3, 2025).
[14] Gary E. Marchant & Rachel A. Lindor, “The Coming Collision Between Autonomous Vehicles and the Liability System,” 52 Santa Clara Law Review 1321, 1325–26 (2012).
[15] Complaint, New York Times Co. v. Microsoft Corp., No. 1:23-cv-11195 (S.D.N.Y. Dec. 27, 2023).
[16] Rebecca Tushnet, “Copyright Law, AI Training, and Fair Use,” 69 Journal of the Copyright Society of the U.S.A. 123, 125–27 (2024).
[17] Julia Angwin et al., Machine Bias, ProPublica (May 23, 2016), available at: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (last visited Oct. 3, 2025).
[18] Aziz Z. Huq, “Racial Equity in Algorithmic Criminal Justice,” 68 Duke Law Journal 1043, 1046–47 (2019).
[19] Shawn Bayern, “The Implications of Modern Business–Entity Law for the Regulation of Autonomous Systems,” 19 Stanford Technology Law Review 93, 95 (2015).
[20] Jack M. Balkin, “The Path of Robotics Law,” 6 California Law Review Circuit 45, 47–49 (2015).
[21] Ugo Pagallo, “Robots of Just War: A Legal Perspective on Autonomous Weapons,” 3 Human Law & Ethics Review 47, 49–51 (2017).
[22] Andrea Bertolini, “Artificial Intelligence and Civil Liability,” 7 European Journal of Comparative Law 1, 15–17 (2019).
[23] Mireille Hildebrandt, Law for Computer Scientists and Other Folk 233–35 (2020).
[24] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016, on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data (General Data Protection Regulation), 2016 O.J. (L 119) 1.
[25] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024, O.J. (L) ___ (AI Act).
[26] Lilian Edwards, “Regulating AI in Europe: Between Human Rights and Market Making,” 27 International Journal of Law & Information Technology 1, 5–6 (2019).
[27] White House, Blueprint for an AI Bill of Rights (2022), available at: https://www.whitehouse.gov/ostp/ai-bill-of-rights/ (last visited Oct. 3, 2025).
[28] Margaret Hu, “Algorithmic Jim Crow,” 86 Fordham Law Review 633, 639–42 (2017).
[29] UK Department for Science, Innovation & Technology, A Pro-Innovation Approach to AI Regulation (Mar. 2023).
[30] Cyberspace Administration of China (CAC), Interim Administrative Measures for Generative Artificial Intelligence Services (2023) (China).
[31] Rogier Creemers, “China’s Social Credit System: An Evolving Practice of Control,” 10 Maastricht Journal of European & Comparative Law 23, 25 (2019).
[32] Information Technology Act, 2000, No. 21 of 2000, India Code; Digital Personal Data Protection Act, 2023 (India).
[33] Rahul Matthan, “India’s AI Governance Strategy: Between Innovation and Regulation,” Economic Times (Sept. 2, 2023).
[34] Andrea Bertolini, “Artificial Intelligence and Civil Liability: A European Perspective,” 25 European Review of Private Law 755, 770–71 (2017).
[35] Cary Coglianese & David Lehr, “Transparency and Algorithmic Governance,” 71 Administrative Law Review 1, 5–7 (2019).
[36] European Commission, Coordinated Plan on Artificial Intelligence 2021 Review (Apr. 2021).
[37] Matthias Leistner, “AI Regulation and the Role of Courts,” 12 Journal of European Competition Law & Practice 561, 563–64 (2021).
[38] OECD, Recommendation of the Council on Artificial Intelligence (May 22, 2019).
[39] Thomas Burri, “International Law and Artificial Intelligence,” 60 German Yearbook of International Law 91, 97–98 (2017).
