Law Vision

The legality of artificial intelligence weapons

There is no legal definition of artificial intelligence (AI) weapons; no treaty or national legislation defines such autonomous weapons systems (AWS). In 2013, the US Department of Defense (DoD) defined AWS in what remains the most cited definition to date: a system which, once activated, can select and engage targets without further human intervention. There are two types of such systems: automated systems and autonomous systems. An automated system is one whose computer reasoning works like a rules-based system: the designers fix the inputs and outputs in code, so for each input the system's output is always the same, and it cannot determine an outcome independently. An autonomous system, by contrast, is not bound to any specific output; it makes inferences from its sensor data and may therefore produce different outputs in similar circumstances. Such a system works more like human intelligence.
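For readers unfamiliar with that distinction, the short Python sketch below is a deliberately neutral, hypothetical illustration using a vehicle's braking logic (the rules, function names and thresholds are invented and describe no actual system): the automated version returns the same fixed output for every input, while the autonomous version infers its output from noisy sensor data, so the same scene can yield different results on different runs.

    # Purely illustrative sketch of the automated/autonomous distinction.
    # All names and thresholds are hypothetical; a neutral braking example
    # is used rather than any weapons logic.
    import random

    # Automated: a rule table fixed in advance. For each input the output
    # is always the same; the system cannot decide independently.
    RULES = {"red_light": "brake", "green_light": "proceed"}

    def automated_decision(signal: str) -> str:
        return RULES.get(signal, "brake")  # unknown input -> fixed safe default

    # Autonomous: the output is inferred from (noisy) sensor data, so the
    # same scene can yield different outputs on different runs.
    def autonomous_decision(distance_readings: list[float]) -> str:
        estimated_distance = sum(distance_readings) / len(distance_readings)
        return "brake" if estimated_distance < 10.0 else "proceed"

    print(automated_decision("red_light"))   # always "brake"
    readings = [9.0 + random.random() * 2 for _ in range(5)]  # simulated noise
    print(autonomous_decision(readings))     # varies with the sensor data

The point of the sketch is only conceptual: the first function is a lookup whose behaviour was fully determined when it was written, while the second function's behaviour depends on what its sensors happen to report.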

International humanitarian law (IHL) has developed with two primary purposes: to protect civilians from military attack, and to protect combatants from unnecessary and cruel suffering. Several treaties and treaty provisions serve these fundamental purposes of IHL by controlling or otherwise limiting certain kinds of weapons and strategies of war. These provisions are commonly referred to as regulating the means and methods of warfare, and IHL has developed specific requirements to limit them. Every new weapon must comply with the principles and provisions of IHL and the other international laws applicable to the parties; otherwise, the weapon will be illegal to use in armed conflict.

It is worth mentioning that there is still no specific regulation of AI weapons in IHL or other international law, but it is well settled that any new weapon must comply with the principles and basic rules of IHL. In this respect, the Martens Clause offers a moral link with IHL: where a new weapons system is not covered by any existing treaty or treaty provision, civilians and combatants remain protected under the principles of humanity and the dictates of public conscience.

The clause developed from Professor von Martens' declaration at the 1899 Hague Peace Conference. It first appeared in the preamble of the 1899 Hague Convention (II). It later appeared in all four Geneva Conventions (GC I: Art. 63; GC II: Art. 62; GC III: Art. 142; GC IV: Art. 158) and both Additional Protocols (AP I: Art. 1; AP II: preamble). Art. 1(2) of AP I provides that 'In cases not covered by this Protocol or by other international agreements, civilians and combatants remain under the protection and authority of the principles of international law derived from established custom, from the principles of humanity and from the dictates of public conscience'.

There is a legal dilemma concerning the interpretation of this clause; it is subject to varying understandings, which by and large fall into three types. Powerful States have typically favoured the most restrictive interpretation, under which the clause merely indicates that customary international law remains applicable after the adoption of treaty rules. Some scholars hold a more moderate interpretation: the clause can help in interpreting other treaty provisions but cannot impose specific prohibitions.

A significant number of scholars support the widest interpretation: that the principles of humanity and the dictates of public conscience provide a minimum threshold for judging any means or method of warfare not expressly prohibited under IHL. On this reading, the clause also supplies the fundamental principle that what is not explicitly prohibited by a treaty is not ipso facto permitted.

The rapid development of AI and autonomous weapons systems may remove the human from targeting and decision-making on the battlefield, effectively substituting an autonomous machine. However, moral arguments remain over a machine's capability to decide questions of human life or death. This concern can be addressed by imposing some degree of human control over AI weapons.
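To make that closing proposal concrete, here is a minimal, purely conceptual Python sketch of such a human-control gate (the function names and the confidence threshold are invented for illustration and reflect no real system): the machine may only propose an action, and nothing executes without an affirmative human decision.

    # Conceptual sketch of human control over an autonomous decision.
    # All names and thresholds are hypothetical illustrations.
    def machine_proposal(sensor_confidence: float) -> str:
        # The machine proposes an action based on its sensor confidence.
        return "act" if sensor_confidence > 0.9 else "refrain"

    def execute_with_human_control(sensor_confidence: float) -> str:
        proposal = machine_proposal(sensor_confidence)
        if proposal == "act":
            # The machine only proposes; a human must affirmatively approve.
            approval = input(f"Machine proposes '{proposal}'. Approve? (y/n): ")
            if approval.strip().lower() != "y":
                return "refrain"  # default to inaction absent human consent
        return proposal

    print(execute_with_human_control(0.95))

The design choice worth noting is that inaction is the default: human silence or refusal never results in action, which is one simple way of picturing what 'a degree of human control' could mean in practice.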

The writer is an LLM Candidate, South Asian University, New Delhi, India.
