The L word Blog

ARTIFICIAL INTELLIGENCE AND INTERNATIONAL CRIMINAL LAW

Written by: Tarun Sharma & Aparna S. Narayan, Students, University of Petroleum and Energy Studies, Dehradun

WHAT IS ARTIFICIAL INTELLIGENCE?

To readers of fantasy and fiction, Artificial Intelligence is hardly a new concept. Although it is not yet common in every field, its use is increasing day by day at the grassroots level in many fields, and it is steadily turning from fiction into science. The world of technology is dynamic in nature; computers and robots in the current era are substituting for human activities. Artificial Intelligence is the capability of a machine to imitate intelligent behaviour.[1] It is an umbrella term that refers to information systems inspired by biological systems and encompasses multiple technologies, including machine learning, deep learning, computer vision, natural language processing, machine reasoning, and strong Artificial Intelligence (AI). Prima facie, Artificial Intelligence has the potential to become a permanent part of our criminal justice system, providing investigative assistance and allowing criminal justice professionals to better maintain public safety.

LEGAL REVIEW

The First Additional Protocol to the Geneva Conventions provides that States must determine whether the employment of a new weapon, means or method of warfare, such as AI, would be prohibited by International Humanitarian Law or any other rule of International Criminal Law in some or all circumstances.[2] The legality of a new weapon in the context of war crimes can be assessed using the following criteria:

· First, is the new weapon prohibited by an international convention, such as the Biological Weapons Convention, the Chemical Weapons Convention or the Convention on Certain Conventional Weapons?

· Second, would the new weapon cause superfluous injury or unnecessary suffering, or widespread, long-term and severe damage to the natural environment?[3]

· Third, is the weapon likely to be indiscriminate in its effects?[4]

· Lastly, would the new weapon respect the principles of humanity and the dictates of public conscience?[5]

This shows that AI weapons must be incorporated into the legal framework of International Humanitarian Law (IHL) without exception. The principles and rules of IHL should and shall apply to AI weapons.

PRECAUTIONS DURING EMPLOYMENT

Humans make mistakes. Employing machines does not resolve this problem, because machines also make mistakes, however intelligent they are. Although AI systems are designed by humans, the machines lack the emotions, sensitivity and understanding that humans possess. The consequences of illegal acts committed by AI, and the legal responsibility arising from those acts, must therefore be attributed to humans. Humans cannot plead that an act was done by a machine in order to dodge their own responsibility; to allow this would be against the spirit and value of the law. The word 'combatant' ordinarily denotes a human being in the law of war crimes and should not be applied to artificial machines under International Criminal Law. Accordingly, all feasible precautionary measures must be taken before such weapons are employed, in accordance with the fundamental rules of IHL.[6]

ACCOUNTABILITY AFTER EMPLOYMENT

If humans are responsible for the employment of AI weapons in armed conflicts, the question arises: which of these humans holds responsibility? Many researchers argue that the end users must take primary responsibility for the wrongful acts of the weapons. The doctrine of 'individual criminal responsibility'[7] applies to individual human beings. Under International Criminal Law, unlimited means and methods of warfare are illegal, so there must be human control over the use of AI weapon systems. Additionally, the States to which such persons belong incur State responsibility for serious violations attributable to them.

ETHICAL ASPECT

Lethal autonomous weapon systems pose a significant challenge to human ethics. AI weapons have no feelings, which creates a higher chance that their use will result in violations of the IHL rules on methods and means of warfare. An AI weapon cannot assess a human's willingness to fight, nor can it reliably distinguish a civilian object from a military objective, and it is unable to respect the principles of military necessity and proportionality. Such weapons also significantly impact the universal human values of equality, liberty and justice. AI weapons do not know the 'right to life' and are therefore unable to apply it with discrimination. Many objections to autonomous weapons lie outside IHL altogether. It is obvious that robots can behave neither morally nor immorally,[8] and it would be very difficult to claim that robots can be more humane than humans.[9]

ONLY A HUMAN BEING MAY DECIDE TO KILL ANOTHER HUMAN BEING

For a robot to kill a human being is horrifying. International Humanitarian Law implicitly assumes that it is a human who decides to kill a person.[10] It is claimed that it is wrong to let autonomous machines decide whom and when to kill.[11] A human can choose to forgo an incidental attack affecting civilian objects even where it would be proportionate; a machine cannot make that choice. For autonomous weapons to be lawful, they must remain subject to general instructions given to them by humans, with the system merely applying those instructions autonomously to a given situation. Beyond this, one human being deliberately killing another in the absence of an immediate threat is hard to imagine, whereas it can readily be imagined of a machine doing the same. Yet in war, killing in the absence of a threat is sometimes lawful.

CONCLUSION

The question remains whether AI will completely replace human resources and robotic wars will emerge. There is a huge gap between developed and developing nations in terms of AI technological capabilities: some countries are competent to employ AI machinery on the battlefield and some are not. In such cases, it will inevitably be necessary to assess the legality of AI weapons and their employment, and IHL will be resorted to. This imbalance among countries with respect to military technologies will probably cause divergence in the application and interpretation of existing IHL rules. Nevertheless, it is important to note that the applicability of International Humanitarian Law to Artificial Intelligence weapon systems is beyond all doubt.

References

[1] N.P. Padhy, Artificial Intelligence and Intelligent Systems 3 (2005).
[2] Additional Protocol I, art. 36.
[3] Additional Protocol I, art. 35.
[4] Additional Protocol I, art. 51.
[5] Additional Protocol I, art. 2(2).
[6] Additional Protocol I, art. 57.
[7] ICC Statute, art. 25.
[8] Noel E. Sharkey, The Evitability of Autonomous Robot Warfare 94, 787, 793 (2012).
[9] Regulations Concerning the Laws and Customs of War on Land, annexed to Hague Convention (IV) Respecting the Laws and Customs of War on Land 2227 (1907).
[10] Human Rights Watch, Losing Humanity: The Case against Killer Robots (Oct. 27, 2012, 3:31 PM), http://www.hrw.org/reports/2012/11/19/losing-humanity.
[11] United Nations, Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Report 39 (2013).


