Author(s): Maitri Pillai and Maria Abraham
Paper Details: Volume 4, Issue 1
Citation: IJLSSS 4(1) 37
Page No: 430 – 442
“The potential benefits of artificial intelligence are huge, so are the dangers.”
~ Dave Waters
RESEARCH QUESTIONS
- How do autonomous weapons systems challenge the current norms and principles of international criminal law?
- In what ways are AI algorithms liable as co-perpetrators/accomplices of international crimes?
- What ethical and legal complexities emerge in apportioning liability between AI entities and their co-perpetrators/accomplices, as well as their creators?
- In what ways could AI-based predictive policing reshape international human rights laws prohibiting crimes against humanity?
- How could international criminal law evolve to address the AI-mediated escalation of biases/inequalities, especially with regard to war crimes and genocide?
RESEARCH OBJECTIVES
- Investigate the impact of autonomous weapon systems on the dominant norms and standards of modern international criminal law, with particular emphasis on the challenges such technologies pose to established notions of accountability, proportionality, and distinction in international armed conflicts.
- Determine the international laws under which algorithmic systems with attributes of artificial intelligence could function as co-perpetrators or accomplices of crimes under international criminal law, and the conditions under which they could do so.
- Consider the ethical and normative issues raised by AI systems in international criminal law, including how the obligations of AI developers and users differ from those attributable to AI systems themselves under international rules on the prosecution of war crimes.
- Examine the role of AI systems in policing, with particular emphasis on their potential to prevent crimes against humanity and on the legal implications of that potential, including algorithmic biases and the human rights issues raised by AI's predictive capabilities.
- Assess the ethical issues that AI systems raise for the legal and methodological foundations of international criminal law in the context of armed conflicts, including war crimes and genocide, with particular emphasis on the potential of AI systems to reduce the risks of abuse and discrimination in international armed conflicts while improving accuracy in warfare.
INTRODUCTION
In a world where technology can envisage crimes, recognize targets and influence decisions through its integration into autonomous weapons systems, questions about the existing norms of justice, accountability and human rights remain at the forefront. The digital age demands a check on international criminal law standards now more than ever, given the constant evolution and rapid integration of artificial intelligence into global governance frameworks. Conflict-stricken regions bear the brunt of this relationship between artificial intelligence and international criminal law, which poses a fundamental inquiry into the suitability of existing legal frameworks for dealing with the complexities these technologies introduce. This research examines the issues artificial intelligence raises with respect to autonomous weapons, predictive policing and decision-making algorithms, and appraises the impact of AI on some of the most basic principles of international criminal law. It considers the possibility of treating AI systems as co-perpetrators of international crimes, as well as the ethical and legal challenges that arise when criminal liability is linked to AI systems and their operators. In addition, it examines AI's potential to entrench further biases and inequalities in relation to war crimes and genocide. Finally, it identifies the legal and policy changes needed so that international criminal law remains responsive to these challenges and continues to uphold justice and human rights in the age of AI.
RELEVANT LAWS, TREATIES, RESOLUTIONS, DECLARATIONS AND PRINCIPLES
GENEVA CONVENTIONS (1949)[1] AND ADDITIONAL PROTOCOLS (1977)[2]
The Geneva Conventions establish the elementary notions of International Humanitarian Law, among them the distinction between combatants and civilians, proportionality, and the principle of necessity. Autonomous weapons systems, capable of conducting operations without human intervention, put these values at risk. It is feared that such systems may not comply with International Humanitarian Law and may even commit acts amounting to war crimes.
Martens Clause
The Martens Clause[3], first enshrined in the 1899 Hague Conventions and restated in subsequent treaties, lays down broader moral rules for conduct in time of war, protecting civilians and combatants even where they are not strictly covered by treaty. Given the rising prominence of AWS, the Clause should be treated as a yardstick for resolving the ethical ramifications of AI in combat, especially in situations where legal criteria are ill-defined.
CONVENTION ON CERTAIN CONVENTIONAL WEAPONS (CCW) (1980)[4]
The CCW’s main aim has been to prohibit or restrict weapons deemed excessively injurious or to have indiscriminate effects. Much of the current debate concerns the prohibition or regulation of lethal autonomous weapons systems. The Convention helps frame the legality and ethics of AWS in armed conflict, since such systems can potentially undermine some of the most basic norms of international criminal law.
ROME STATUTE OF THE INTERNATIONAL CRIMINAL COURT (1998)[5]
The Rome Statute is designed to address human culpability for offences within the jurisdiction of the ICC, such as genocide and war crimes. If AI becomes more deeply involved in the commission of crimes, by selecting targets or making decisions when attacks are carried out, states may have a duty to revise or reinterpret the Statute to encompass non-human actors.
NUREMBERG PRINCIPLES (1950)[6]
The Nuremberg Principles establish individual liability for war crimes, even where the acts were committed pursuant to superior orders. Given the growing involvement of AI in international crimes, it may be necessary to extend these principles to those who develop or deploy an AI system. This raises the question of AI “agency” versus human accountability.
RESPONSIBILITY OF STATES FOR INTERNATIONALLY WRONGFUL ACTS (2001)[7]
These Articles provide that states are responsible for any breach of their obligations under international law. Should a state use artificial intelligence in a manner amounting to international crimes, these rules would govern the state’s responsibility, drawing a clear line between the conduct of the state and that of the autonomous systems it deploys.
INTERNATIONAL COVENANT ON CIVIL AND POLITICAL RIGHTS (ICCPR) (1966)[8]
The International Covenant on Civil and Political Rights (ICCPR) safeguards fundamental rights such as privacy and the right to a fair trial. The adoption of AI-driven predictive policing poses an intrinsic risk to these rights, particularly for marginalised groups, because it can give rise to biased and discriminatory outcomes. The very fact that an AI model could be instrumental in crimes against humanity warrants careful evaluation of the results such systems produce.
INTERNATIONAL CONVENTION ON THE ELIMINATION OF ALL FORMS OF RACIAL DISCRIMINATION (ICERD) (1965)[9]
The International Convention on the Elimination of All Forms of Racial Discrimination (ICERD) requires states parties to eliminate all forms of racial discrimination. Because the use of AI in conflict increases the chances of racial biases infiltrating technology, most notably in target recognition and combat decisions, this treaty is highly significant for adapting international criminal law to AI’s role in fuelling bias in war crimes or genocide.
UNIVERSAL DECLARATION OF HUMAN RIGHTS (1948)[10]
The Declaration sets out the catalogue of inalienable rights and freedoms, including equality and non-discrimination. AI-based predictive policing, driven by biased algorithms, may place many of these basic rights in jeopardy. It therefore bears directly on the question of preventing crimes against humanity.
HUMAN RIGHTS COUNCIL RESOLUTION ON THE RIGHT TO PRIVACY IN THE DIGITAL AGE (2015)[11]
This resolution is important for the protection of privacy rights in the era of digital technology, including artificial intelligence. Predictive policing involves extensive data analysis and thus poses significant privacy issues. Monitoring mechanisms therefore need to be put in place to ensure that AI does not perpetuate human rights violations, particularly in the most vulnerable countries.
UN GUIDING PRINCIPLES ON BUSINESS AND HUMAN RIGHTS (2011)[12]
Although not legally binding, these principles guide corporations, especially those developing AI, in protecting human rights. It is essential that international criminal law complement this framework to ensure that AI technology is never used in violation of human rights, particularly in wartime.
UN GENERAL ASSEMBLY RESOLUTION 78/241 (2023)[13]
This resolution was adopted by the United Nations General Assembly on 22 December 2023. It reaffirmed the applicability of international law, including the United Nations Charter, international humanitarian law, and international human rights law, to autonomous weapon systems, and highlighted humanitarian, legal, security, technological, and ethical concerns raised by AWS. It also calls upon the United Nations Secretary-General to seek the views of Member States on ways of addressing these concerns, with particular emphasis on human involvement in decisions on the use of force.
REPORT OF THE UNITED NATIONS SECRETARY-GENERAL ON LETHAL AUTONOMOUS WEAPONS SYSTEMS (2024)[14]
The report, published in 2024 pursuant to Resolution 78/241, brings together the positions of states on the challenge posed by lethal autonomous weapons systems (AWS) and calls on states to agree, by the end of 2026, on a legally binding instrument prohibiting weapons that confer life-and-death decisions on machines. It emphasizes the importance of human involvement in decision-making in accordance with international humanitarian law (IHL) and raises moral objections as well as issues of international criminal law (ICL).
JUDICIAL PRONOUNCEMENTS
While autonomous weapons systems are a relatively new field with few judicial precedents explicitly addressing these issues, there are nevertheless some relevant legal rulings, cases, and events that provide useful background:
PROSECUTOR V. THOMAS LUBANGA DYILO (ICC, 2012)[15]
The first ICC conviction for the crime of using child soldiers foregrounded the requirement to pin culpability on those who carry out war crimes. Although it has no direct bearing on AI, it underlines the difficulty of prosecuting crimes involving complex systems and command responsibility, which is relevant to those who deploy AI in international crimes.
PROSECUTOR V. AL HASSAN AG ABDOUL AZIZ[16]
This case addresses the admissibility of evidence obtained through potential human rights violations. It also signals that AI-generated evidence could be challenged if it was obtained in a way that violates human rights, such as breaches of data privacy. The ICC has ruled that evidence obtained in violation of human rights may be excluded, reinforcing the call for strict thresholds on admitting AI-generated evidence.
CLEARVIEW AI LEGAL CHALLENGES
The Clearview AI litigation[17] may serve as a cautionary example for the ICC on the use of AI techniques such as facial recognition, since AI can be trained on illegally sourced data: Clearview AI faced international legal claims for violating privacy laws through unauthorized photo scraping. The legal decisions against Clearview set a precedent for assessing the threats that AI systems may pose in the field of international criminal law.
EUROPEAN UNION ARTIFICIAL INTELLIGENCE ACT, 2024 (EU AI ACT)
The Act prohibits certain uses of AI for real-time biometric identification and imposes liability obligations for high-risk AI. Concretely, it regulates the use of AI in criminal justice. Although it is not directly applicable to ICC practice, it could influence the ICC’s use of AI so that evidence produced by AI tools does not violate human rights or lead to biased decisions, and it may set a precedent for international legislation on artificial intelligence.
WACHONWOO V. REPUBLIC OF KOREA[18]
This case concerned state surveillance violating the privacy rights enshrined in the ICCPR. It provides both a broader context for understanding how AI technology can infringe privacy rights and legal concepts applicable if the ICC’s use of AI gives rise to such concerns.
Although there are no directly applicable cases on AI in the context of international criminal law, the preceding examples show how increasingly thorny legal issues of privacy, human rights, and admissibility of evidence may shape the application of AI in this area of the law. As AI becomes more deeply infused into the work of the ICC, and indeed of all international organizations, such issues will occupy the agenda for years to come, doubtless leading to new case law in due course.
CRITICAL ANALYSIS
The interaction between AI and international criminal law on issues of responsibility, proportionality, and other core principles enshrined in the most vital legal instruments, such as the Geneva Conventions, the Rome Statute, and the Nuremberg Principles, is complex. Given the rapid development of autonomous weapons systems and AI algorithms, the current regime, which was not designed with such developments in mind, is in urgent need of review. The following analysis outlines the issues and challenges posed by AI and asks whether they can be addressed within the current international legal regime or whether new legal interpretations, not to mention reforms, are required.
THE INTERSECTION OF AUTONOMOUS WEAPONS SYSTEMS AND INTERNATIONAL CRIMINAL LAW
In modern war, autonomous weapon systems pose a further grave threat to international criminal law, most notably under the Geneva Conventions and the Martens Clause. Because they operate independently, these systems prompt the critical question of whether their use accords with the principles of distinction, proportionality, and necessity. As long as AWS cannot reliably distinguish belligerents from non-combatants and minimize civilian casualties, the effectiveness of the Geneva Conventions is undermined. The Martens Clause, by asserting the importance of human values and the dictates of public conscience, calls the morality of AWS into question. The absence of human control over life-and-death decisions poses a serious threat to the very edifice of the doctrine of responsibility in war. The present legal regime under the Convention on Certain Conventional Weapons (CCW) has been criticized as too slow in determining the legality of Autonomous Weapons Systems (AWS), which calls into question the ability of international law to keep pace with technological innovation.
AI’S ROLE AS ACCOMPLICES IN INTERNATIONAL CRIMES
Classifying AI algorithms as co-perpetrators or accomplices in international crimes stretches the limits of the Rome Statute, which normally hinges on human action and intent, or mens rea. The use of AI in targeting or in strategic decision-making effectively blurs those limits. The Rome Statute contains no provision addressing a non-human actor, leaving a legal void as to whether a crime such as genocide or a war crime committed with the involvement of artificial intelligence can be prosecuted. The Nuremberg Principles face the same dilemma of identifying the responsible individual. Although these principles emphasize holding humans liable for war crimes, they would need to be extended to those who design, deploy, and oversee AI systems. In any event, the near-total absence of legal precedent on the classification of AI as a legal entity under international criminal law is a remarkable deficiency that calls for reconsideration of the Rome Statute and related legal principles. Large ethical and legal dilemmas arise over how responsibility should be apportioned between AI systems and their human operators or creators.
ETHICAL AND LEGAL ISSUES IN HOLDING AI ACCOUNTABLE
The apportionment of accountability between artificial intelligence (AI) systems and the humans who develop or use them has given rise to various ethical and legal concerns. The current legal frameworks, such as the Rome Statute and the Nuremberg Principles, are premised on human entities and their intentions; extending them to AI systems raises doubts about whether AI can possess intent or agency at all. The lack of clear guidelines on apportioning accountability for AI systems has created a significant gap in the effective operation of international criminal law. One proposal is a model of shared accountability, covering AI developers and users as well as, potentially, AI systems recognized as legal entities.
THE INFLUENCE OF AI-DRIVEN PREDICTIVE POLICING ON INTERNATIONAL CRIMINAL LAW
The deployment of AI-driven predictive policing has opened a new dimension in crime prevention while raising serious concerns about the protection of human rights. The International Covenant on Civil and Political Rights (ICCPR) and the Universal Declaration of Human Rights highlight the importance of privacy and the right to a fair trial. Predictive policing, which relies on large-scale data analysis to prevent crime, poses a major threat to these rights, especially when artificial intelligence targets vulnerable segments of the population. The Human Rights Council Resolution on the Right to Privacy in the Digital Age underlines the need for measures to prevent the infringement of human rights by artificial intelligence. The lack of specific regulation of AI in policing is a serious gap in the protection of human rights in the digital era.
AI AND THE AGGRAVATION OF BIASES IN WAR CRIMES AND GENOCIDE
The application of artificial intelligence can escalate pre-existing biases and inequalities, especially in situations involving war crimes and genocide. International laws therefore need to be re-evaluated, with emphasis on improving the current regulations. The International Convention on the Elimination of All Forms of Racial Discrimination and the Guiding Principles on Business and Human Rights provide guidelines for eliminating racial and ethnic prejudice. Yet the application of AI in these situations can amplify biases and inequality, leading to discriminatory practices contrary to international law, and the current regulations do not appear effective at addressing this escalation.
CONCLUSION AND SUGGESTIONS
The interface of Artificial Intelligence and International Criminal Law exposes glaring loopholes in the extant legal regimes with respect to accountability, human rights and ethics. The extant laws, including the Geneva Conventions, the Rome Statute and the Nuremberg Principles, are grounded in a human-centred paradigm ill-suited to the complex challenges of AI-powered autonomous weapons. An immediate recalibration of these laws with respect to AI-driven biases, human rights and accountability is therefore imperative. Ultimately, the viability of international criminal law in the face of AI-powered autonomous weapons will lie not merely in its ability to adjust to technological innovation, but in its ability to sustain justice in an AI-dominated world.
RECOMMENDATIONS
- The Rome Statute should be amended to recognize AI entities as potential perpetrators under international criminal law and to establish the basis of accountability of AI entities and their developers/operators.
- The Geneva Conventions should be revised to address the unique challenges of autonomous weapons to the principles of distinction, proportionality and accountability.
- New treaties or protocols under the framework of conventional weapons should be developed to address the legal and ethical concerns of the employment of AI in war.
- The Martens Clause should be applied expansively to ensure that the employment of AI in war conforms to stricter ethical guidelines and the dictates of public conscience.
- Stronger human rights protections, especially under the ICCPR, should be developed to preclude discriminatory practices and violations of privacy, especially with the advent of predictive policing that employs AI.
- An international supervisory body should be created that oversees the employment of artificial intelligence in war situations, ensuring that the rules of international law are followed, with proactive measures taken to preclude violations.
The foregoing recommendations are intended to guide the evolution of international criminal law so that it is commensurate with the challenges posed by artificial intelligence, while retaining the core commitments to justice, accountability and human rights that become even more central in a technologically mediated world. The employment of technology in war may be a defining feature of the modern age, but it should not become the defining feature of justice itself.
BIBLIOGRAPHY
- Al-Qusi, H. (2018). The Problem of the Person Responsible for Operating the Robot – A Prospective Analytical Study in the European Civil Law Rules for Robots. Generation Journal of In-Depth Legal Research, 89-93.
- F. Santoni de Sio, ‘Four Philosophical Considerations of AI Ethics’, 2 September 2022, available online at https://medium.com/@reshaping_work/four-philosophical-considerations-of-the-ai-ethics-1e83e366e007 (last visited 10 August 2024).
- F. Santoni de Sio, G. Mecacci, ‘Four Responsibility Gaps with Artificial Intelligence: Why They Matter and How to Address Them’, 34 Philosophy & Technology (2021) 1057-1084, at 1070.
- Guido Acquaviva, Crimes without Humanity? Artificial Intelligence, Meaningful Human Control, and International Criminal Law, Journal of International Criminal Justice, Volume 21, Issue 5, November 2023, Pages 981–1004, https://doi.org/10.1093/jicj/mqad024
- Hallevy, G. (2013). When robots kill: Artificial intelligence under criminal law. Northeastern University Press.
- M. Bo and T. Woodcock, ‘Lethal Autonomous Weapons, War Crimes, and the Convention on Conventional Weapons’, The Global, 28 May 2019, available online at https://theglobal.blog/tag/international-criminal-law/ (visited 9 August 2024).
- M. deGuzman, Shocking the Conscience of Humanity: Gravity and the Legitimacy of International Criminal Law (OUP, 2020), at 89.
- UK Parliament, House of Commons Science and Technology Committee. (2016). Robotics and artificial intelligence.
- Schwab, K. (2017). The Fourth Industrial Revolution – A Book in Minutes. In Summaries of international books. Mohammed bin Zayed Knowledge Foundation
- United Nations Congress on Crime Prevention and Criminal Justice. (2020). Current Crime Trends, Recent Developments, and Emerging Solutions, especially New Technologies as Means of Committing Crime and Tools for Combating Crime. Workshop at the Fourteenth Congress held in Kyoto, Japan.
[1] Geneva Convention for the Amelioration of the Condition of the Wounded and Sick in Armed Forces in the Field, Aug. 12, 1949, 75 U.N.T.S. 31.
[2] Protocol I Additional to the Geneva Conventions of August 12, 1949, and Relating to the Protection of Victims of International Armed Conflicts, June 8, 1977, 1125 U.N.T.S. 3.
[3] Hague Convention II with Respect to the Laws and Customs of War on Land art. 1, July 29, 1899, 32 Stat. 1803, 1 Bevans 247.
[4] Convention on Certain Conventional Weapons, Apr. 10, 1980, 1342 U.N.T.S. 137.
[5] Rome Statute of the International Criminal Court, July 17, 1998, 2187 U.N.T.S. 90.
[6] Principles of International Law Recognized in the Charter of the Nuremberg Tribunal and in the Judgment of the Tribunal, reprinted in Yearbook of the International Law Commission 1950, U.N. Doc. A/CN.4/SER.A/1950/Add.1 (1950).
[7] Responsibility of States for Internationally Wrongful Acts, U.N. Doc. A/56/10 (2001).
[8] International Covenant on Civil and Political Rights, Dec. 16, 1966, 999 U.N.T.S. 171.
[9] International Convention on the Elimination of All Forms of Racial Discrimination, Dec. 21, 1965, 660 U.N.T.S. 195.
[10] Universal Declaration of Human Rights, G.A. Res. 217A (III), U.N. Doc. A/810, at 71 (1948).
[11] Human Rights Council Resolution on the Right to Privacy in the Digital Age, U.N. Doc. A/HRC/29/L.32 (2015).
[12] Guiding Principles on Business and Human Rights: Implementing the United Nations “Protect, Respect and Remedy” Framework, U.N. Doc. A/HRC/17/31 (2011).
[13] Lethal Autonomous Weapons Systems, G.A. Res. 78/241, U.N. Doc. A/RES/78/241 (Dec. 22, 2023).
[14] Report of the Secretary-General: Lethal Autonomous Weapons Systems, U.N. Doc. A/79/88 (July 1, 2024).
[15] ICC-01/04-01/06, Judgment (14 March 2012)
[16] (ICC-01/12-01/18)
[17] The Information Commissioner v Clearview AI Incorporated, Neutral Citation Number[2025] UKUT 319 (AAC)
[18] (UN HRC Communication No. 1910/2009)
