Autonomous Attack—Opportunity or Spectre?
© T.M.C. Asser Press and the authors 2015
Terry D. Gill, Robin Geiß, Robert Heinsch, Tim McCormack, Christophe Paulussen and Jessica Dorsey (eds.), Yearbook of International Humanitarian Law 2013, Vol. 16, Chapter 4, DOI 10.1007/978-94-6265-038-1_4
Abstract
This article tackles the tricky legal issues associated with autonomy and automation in attack. Having clarified the meanings of these notions, it assesses the implications of the rules of weapons law for such technologies. More challenging issues seem, however, to be raised by the law of targeting, and in particular by the evaluative assessments required of attackers, for example in relation to the precautions in attack prescribed by Additional Protocol I. The article therefore addresses how these rules can sensibly be applied when machines undertake such decision-making. Human Rights Watch has called for a comprehensive ban on autonomous attack technologies, and the appropriateness of such a proposal at the present stage of technological development is assessed. The article then draws conclusions.
Keywords
Autonomous · Automation · Distinction · Weapons law · Targeting · Precautions

William Henry Boothby, Air Commodore (Retired)
4.1 Introduction
The horrors that warfare can impose on soldiers, fighters and hapless civilians unwillingly caught up in the fight were played out in the 1940s throughout Europe, North Africa, the Atlantic and Pacific Oceans and the Far East; in the 1960s and 1970s in Indo-China; in the 1990s in the former Yugoslavia; more recently in Libya and Syria; and in all of these years and many more besides in countless other parts of the world. As carefully worded legal rules strive ever more prescriptively to protect those who try to keep out of the fight, it is those very individuals who form an ever-increasing proportion of the casualties. War in which men use weapons against one another is, truly, a miserable affair.
Against this depressing reality, in this article we consider whether increasing levels of automation in attack and futuristic notions of autonomous attack decision-making represent an opportunity to be grasped with both hands, or yet another unwanted scientific advance that promises even less discrimination in the ‘killing game’.
Autonomy and automation are of course discrete notions, but we must be clear as to the meaning to be ascribed to each. Section 4.2 of this article will therefore assess what a clear taxonomy might look like. International law prohibits the use of some weapons, means and methods of warfare in armed conflict and imposes restrictions on the use of other weapons. In Sect. 4.3 we will consider the weapons law rules of most evident relevance to autonomous attack technologies. A distinct branch of the law of armed conflict, the law of targeting, regulates how weapons may be used. In Sect. 4.4, we will discuss what targeting law rules are likely to have the greatest relevance to autonomous attack technologies. The notion of autonomous attack decision-making has provoked controversy, and Human Rights Watch has proposed a comprehensive ban on such technologies. In Sect. 4.5 we will ponder whether a comprehensive ban is merited at this stage. In Sect. 4.6 we will conclude by proposing a way ahead.
4.2 What Do We Mean?
The 2002 attack in which the US targeted Qaed Senyan al-Harthi in Yemen by means of a Predator remotely piloted aircraft equipped with a Hellfire missile proved the concept of aerial attack in the modern era using remotely piloted aircraft.1 Applied to this use of the air environment, automation involves the mechanisation of the platform’s decisions to a degree falling short of autonomy. It would, however, be wrong to think of these matters exclusively in terms of the air environment, as automation and autonomy are equally applicable in maritime warfare,2 on land, in outer space and indeed in cyberspace. Nevertheless, elements of the discussion in the present article will be by reference to air platforms. But what do automation and autonomy mean? We start by considering what doctrine has to say.
An automated system has been described as one that, responding to inputs from one or more sensors, is programmed logically to follow a pre-defined set of rules to provide an outcome.3 If you know the rules under which it operates, you can predict that outcome. An automated system functions in a self-contained manner once deployed, and will independently verify or detect a particular type of target object and then fire or detonate the munitions.4 Automated weapons are nothing new; consider for example certain kinds of mine or booby-trap.5
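By way of illustration, the following minimal sketch (in Python, using entirely hypothetical sensor fields, thresholds and rules) captures the sense in which an automated system is predictable: a fixed, pre-defined rule set maps sensor inputs to an outcome, so anyone who knows the rules can predict the result.

```python
# Minimal sketch of an 'automated' system in the doctrinal sense:
# a fixed, pre-defined rule set maps sensor inputs to an outcome.
# All sensor fields, thresholds and rules here are hypothetical
# illustrations, not drawn from any real weapon system.

def automated_decision(sensor_reading: dict) -> str:
    """Apply a pre-defined rule set to a sensor input."""
    if sensor_reading.get("pressure_kg", 0) > 100:       # rule 1
        return "detonate"
    if sensor_reading.get("magnetic_signature", False):  # rule 2
        return "detonate"
    return "remain_inert"                                # default

# The same input always yields the same, predictable outcome.
print(automated_decision({"pressure_kg": 150}))  # detonate
print(automated_decision({"pressure_kg": 20}))   # remain_inert
```

A pressure-actuated mine fits this pattern exactly: the 'rules' are built into the firing mechanism, and the outcome follows deterministically from the input.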
Autonomous systems, by contrast, take the matter several stages further. They employ an understanding of higher-level intent and direction and an awareness of their environment to take appropriate action and thereby bring about a desired state. Critical to autonomy is the system’s ability to decide a course of action from alternatives without depending on human oversight and control. Its overall activity is predictable but individual actions may not be.6 The system operates independently; the software identifies and engages targets without being programmed to target a specific object. Importantly, the International Committee of the Red Cross (ICRC) has suggested that development of a truly autonomous weapon system that can implement international humanitarian law represents a monumental programming challenge that may well prove impossible.7
No doubt the increasing role of automation in everyday life and consequent familiarity with it will influence its perceived acceptability in warfare, and logic suggests that peacetime technologies that are clearly adaptable for use in military environments will increasingly be seen on the battlefield.8 This will be attributable to their appeal, which in turn will derive, for example, from their convenience of use, from their tendency to limit the dangers faced by friendly force personnel and from their ability to enable timely responses to rapidly appearing, potentially overwhelming threats. The future battlespace will involve threats that require rapid and flexible responses at speeds that presuppose automated decision-making. Other advantages claimed for these new technologies include saving lives, fearlessness, remembering orders, the absence of emotional responses, suitability for dull, dangerous and dirty tasks, requiring little or no rest, shareable intelligence and computation speed.9 It follows that the weapons discussed in this article are likely to be seen as having such utility that their further development and eventual procurement will be regarded as critical to operational success.
Autonomous weapons can loiter, seek, identify and engage targets and can report the point of weapon impact. The Wide Area Search Autonomous Attack Miniature Munition, for example, is a small cruise missile with a loiter capability that can seek a specific target and that, on acquisition, attacks or requests permission to do so. The autonomous element of the weapon is, it seems, posing significant engineering issues, for example because it is likely beyond current technology for the machine to make the complicated assessments required to determine whether or not a particular attack would be lawful if there is an expectation of collateral damage.10
Burrowing down a little deeper, doctrine holds that "so long as it can be shown that the system logically follows a set of rules or instructions and is not capable of human levels of situational understanding, then they should only be considered automated."11 Peter Asaro, by contrast, describes as autonomous "any system that is capable of targeting and initiating the use of potentially lethal force without direct human supervision and direct human involvement in lethal decision-making," a formulation which could include systems current doctrine sees as 'automated'. Working out a widely accepted taxonomy is therefore a clear priority for future work.12
For example, under national doctrine, an unmanned aircraft would be part of an automated system if, first, it is pre-programmed either to proceed to a set location and there to fire a weapon or if, second, having reached a pre-set location, it searches a defined area of territory for specified objects which, when detected, it recognises using on-board image recognition technology, and attacks. If, third, a similarly equipped unmanned platform were additionally programmed to make evaluations or decisions or to undertake certain procedures before it then decides whether and how to undertake the attack, this would seem to render it an autonomous as opposed to an automated system. So the doctrinal distinction seems to lie in genuinely mechanical decision-making processes that go beyond simple recognition resulting in automatic attack.13
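The distinction just drawn might be sketched as follows. Everything here is a hypothetical placeholder rather than a description of any fielded system: the recognition step, the notional 'assessment' and all names and values are invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical contrast between the second (automated) and third
# (autonomous) examples in the text. recognise() and the Assessment
# scores are illustrative stand-ins only.

@dataclass
class Assessment:
    expected_advantage: float  # notional score, not a real metric
    expected_harm: float

def recognise(signature: str) -> str:
    # Placeholder image-recognition step: maps a sensed signature
    # to a target class.
    return "specified_target" if signature == "tank_profile" else "unknown"

def automated_engage(signature: str) -> bool:
    # Automated (second example): recognition alone triggers attack.
    return recognise(signature) == "specified_target"

def autonomous_engage(signature: str, assessment: Assessment) -> bool:
    # Autonomous (third example): recognition is followed by an
    # evaluative decision before the attack proceeds.
    if recognise(signature) != "specified_target":
        return False
    return assessment.expected_advantage > assessment.expected_harm

print(automated_engage("tank_profile"))                         # True
print(autonomous_engage("tank_profile", Assessment(0.4, 0.7)))  # False
```

The evaluative step in `autonomous_engage` is what, on the doctrinal view, takes the system beyond simple recognition resulting in automatic attack.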
An apparently different understanding is reflected in a recent United States Department of Defense Directive14 which describes a weapon system that “can select and engage targets without further intervention by a human operator” as autonomous. It would seem clear that this would characterise the second and third examples discussed in the previous paragraph as autonomous systems.
The US definition would seem to focus the discussion on all technologies that cause the machine to select the target for an attack without human involvement at that moment. It seems to matter little to this definition that recognition technology is used to decide that an observed object or person is a target. The point deserves repeating that if we are to have a sensible debate on these matters, generally accepted terminology is essential. For the purposes of the following discussion, autonomy and automation will be understood in accordance with the JDN 2/11 definitions noted earlier in this Section. Legal issues are likely to arise, however, in relation to both autonomous and certain automated attack technologies, so in the following discussion the reader will frequently see reference to both.
4.3 Weapons Law
4.3.1 Weapons Law, Automation and Autonomy
The basic principle is that "[i]n any armed conflict, the right of the parties to the conflict to choose methods or means of warfare is not unlimited."15 Certain legal principles applying to weaponry flow from this. The first, cardinal principle, which is customary and thus binds all states, prohibits the employment of "weapons, projectiles and materials and methods of warfare [that are] of a nature to cause superfluous injury or unnecessary suffering."16 It seems unlikely, however, that the automated or autonomous nature of the target selection mechanism will directly contribute to the degree of suffering or injury that the weapon system causes. That is most likely to be affected by the munition, that is, the missile, bomb or other device actually delivered to the target. It is not therefore necessary to discuss the principle further.
The second fundamental weapons law principle prohibits weapons, means or methods of warfare that are indiscriminate by nature. The treaty formulation of the rule that is so widely accepted as to be customary and thus universally binding includes as prohibited indiscriminate attacks: “(b) those which employ a method or means of combat which cannot be directed at a specific military objective; or (c) those which employ a method or means of combat the effects of which cannot be limited as required by th[e] Protocol; and [which], consequently, in each such case, are of a nature to strike military objectives and civilians or civilian objects without distinction.”17
This rule will have great importance where automated and autonomous attack technologies are concerned. The likelihood, however, is that the recognition technology will be specifically designed to try to ensure that the weapon engages the intended kinds of object or the intended persons, which in each case have previously been assessed as constituting lawful targets. If the system reasonably achieves that intended purpose it will not be indiscriminate by nature. If, however, when tested it is found to be just as likely to attack, for example, civilian vehicles as military ones, the rule is likely to be breached by its use. It will therefore be critical to test the technology in realistic conditions of the sort likely to be encountered in the intended circumstances of use, and then to consider that test performance carefully in determining whether the rule has been complied with, that is whether the automated or autonomous technology adequately distinguishes between objects and persons it is lawful to attack and those entitled to protection.
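A minimal sketch of the kind of test evaluation contemplated here might look like the following. The stand-in classifier and the labelled trial objects are invented for illustration, not drawn from any actual system or trial.

```python
# Hypothetical sketch of a discrimination test: run the recognition
# step against labelled trial objects and measure how often protected
# (civilian) objects would be engaged.

def would_engage(observed: str) -> bool:
    # Stand-in for the weapon system's recognition technology.
    return "military" in observed

trials = [
    ("military_truck", True), ("civilian_truck", False),
    ("military_tank", True), ("civilian_bus", False),
    ("civilian_car", False), ("military_apc", True),
]

civilian = [obj for obj, lawful in trials if not lawful]
military = [obj for obj, lawful in trials if lawful]

wrongly_engaged = sum(would_engage(obj) for obj in civilian)
correctly_engaged = sum(would_engage(obj) for obj in military)

# A system 'just as likely' to attack civilian vehicles as military
# ones would show comparable rates on the two lines below, suggesting
# a breach of the rule against indiscriminate weapons.
print(f"civilian objects wrongly engaged: {wrongly_engaged}/{len(civilian)}")
print(f"military objects engaged: {correctly_engaged}/{len(military)}")
```

Real testing would of course involve far larger and more varied trial sets, realistic clutter and degraded sensing conditions; the point is simply that measured discrimination performance is what the legal assessment must rest on.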
The third, composite but not customary, rule of the law of weaponry protects the environment. It comes in two parts, the first of which prohibits States party to the UN Environmental Modification Convention (ENMOD) from engaging in "military or any other hostile use of environmental modification techniques having widespread, long-lasting or severe effects as the means of destruction, damage or injury to any other State party."18
‘Widespread’ encompasses "an area on the scale of several hundred square kilometres", ‘long-lasting’ suggests "a period of months, or approximately a season" and ‘severe’ involves "serious or significant disruption or harm to human life, natural and economic resources or other assets."19 Again, the automated or autonomous nature of the decision processes we are discussing in this article would not seem to have a direct impact on the environment. The rule will have greater relevance for the warhead, munition and associated damage technologies. This will also be the case in relation to the second part of the environmental protection rule, which prohibits the employment of "methods or means of warfare which are intended, or may be expected, to cause widespread, long-term and severe damage to the natural environment"20; it is therefore prohibited to "use […] methods or means of warfare which are intended or may be expected to cause such damage to the natural environment and thereby to prejudice the health or survival of the population."21 The terms ‘widespread, long-term and severe’ do not necessarily have the same meaning as in ENMOD. Moreover, whereas under the ENMOD rule the presence of any one of the three criteria suffices, under the API rule all three features must be established, so the threshold for an API rule breach is that much higher.
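The difference between the two thresholds can be expressed compactly; the boolean 'findings' below are hypothetical.

```python
# Illustrative contrast of the two thresholds: the ENMOD rule is
# engaged if ANY ONE criterion is met, the API rule only if ALL THREE
# are met. The findings assumed here are invented.

widespread = True     # e.g. an area of several hundred square kilometres
long_lasting = False  # ENMOD: months, or approximately a season
severe = False        # serious disruption or harm to life or resources

enmod_threshold_met = any([widespread, long_lasting, severe])  # True
api_threshold_met = all([widespread, long_lasting, severe])    # False

print(enmod_threshold_met, api_threshold_met)
```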
There are no rules of international law of armed conflict that specifically address automated or autonomous attack technologies. Such technologies might of course be applied to weapons for which there are specific weapons law rules, such as anti-personnel landmines, cluster munitions, chemical or biological weapons, particular fragmentation weapons, incendiary weapons and so on. The ad hoc weapons law rule will tend, however, to address the lawfulness of the munition that is employed as opposed to the lawfulness of the automated or autonomous decision-making process.
States party to API have the obligation, "[i]n the study, development, acquisition or adoption of a new weapon, means or method of warfare […] to determine whether its employment would, in some or all circumstances, be prohibited by th[e] Protocol or by any other rule of international law applicable to the High Contracting Party."22 It is therefore important to work out what ‘weapons’, ‘means’ and ‘methods’ of warfare entail. ‘Weapons’ are offensive capabilities that can be applied to a military object or enemy combatant.23 The current use of an object,24 the intention to use it in a particular way25 and the operational purpose that it is designed to fulfil26 are all individually capable of characterising an object as a ‘weapon’, so the term seems to describe an offensive capability applied, or intended or designed to be applied, to a military object or enemy combatant. The damaging or injurious effect of the weapon need not derive from a kinetic impact,27 which leads to the conclusion that a cyber tool would be a weapon if it would have such violent consequences were it to be used, or if it is intended or designed to be used, against a military objective or enemy combatant.28
Means of warfare are weapons, weapon systems29 or platforms employed for the purposes of attack30 and methods of warfare are activities designed adversely to affect the enemy’s military operations or military capacity.31
In an Article 36 weapon review of an automated or autonomous weapon system, the generic circumstances in which it will be used are considered. The question for the reviewing state is whether the legal rules that apply to it prohibit or restrict those intended circumstances of use. If they do, the weapon review should draw attention to those prohibitions or restrictions. States that are not party to API are, arguably, bound by a customary rule requiring that they review new weapons before fielding them.32 Neither the treaty nor the customary rule prescribes the form or procedures associated with such reviews. Depending on the circumstances, advice to a commander or full, reasoned and written advice to ministerial authorities may be called for.33
4.4 Targeting Law and Autonomy
One of the core, customary rules of the law of armed conflict requires that a distinction be constantly maintained between civilians and combatants and between civilian objects and military objectives.34 Whether autonomous attack technologies can be employed consistently with this principle will depend on the technical performance of the recognition technology. Where there is doubt as to certain matters, an attack must not proceed.35 In the case of attacks against objects, the decisive question is whether a weapon system can differentiate sufficiently between, for example, the military objects36 it is designed to recognise and civilian objects that are protected by the law. An automated or autonomous system may, for example, be programmed to examine an object it observes by reference to the particular characteristics of, say, a tank, artillery piece or armoured personnel carrier. If sufficient points of similarity are achieved, the controlling software may determine that the observed object is a military object that it is lawful for the weapon system to attack. Testing will, however, be important to evaluate the reliability with which the weapon system conducts this recognition process and, thus, its ability to comply acceptably with the distinction principle. Such testing may prove challenging and it seems likely that computer modelling will also be required.37
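The 'points of similarity' process just described might be sketched as follows. The feature names, the reference template and the threshold are invented for illustration only.

```python
# Hypothetical sketch of points-of-similarity recognition: the
# controlling software compares observed characteristics against a
# reference template and engages only if enough points match.

TANK_TEMPLATE = {"tracked", "turret", "main_gun", "armour_profile"}
SIMILARITY_THRESHOLD = 3  # minimum matching points before engagement

def observed_features(sensor_return: dict) -> set:
    # Stand-in for feature extraction from the platform's sensors.
    return {f for f, present in sensor_return.items() if present}

def is_recognised_as_tank(sensor_return: dict) -> bool:
    matches = observed_features(sensor_return) & TANK_TEMPLATE
    return len(matches) >= SIMILARITY_THRESHOLD

# A reliable system should reject objects sharing only a few points
# of similarity, e.g. a tracked civilian excavator.
print(is_recognised_as_tank({"tracked": True, "turret": True,
                             "main_gun": True, "armour_profile": True}))  # True
print(is_recognised_as_tank({"tracked": True, "turret": False}))          # False
```

The legally significant questions are then empirical ones: how the threshold is set, and how reliably the feature extraction performs against realistic battlefield clutter, which is precisely what the testing and modelling referred to above must establish.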
Even greater technical challenges confront the development of systems for the automated or autonomous attack of individuals. The principle of distinction, when applied to attacks that target persons, requires that combatants and civilians directly participating in hostilities be differentiated from civilians taking no part in the hostilities.38 Some research may focus on the mechanical observation of characteristics peculiar to combatants, such as their metallic footprint or, perhaps, aspects of their behaviour and movement.39 In relation to the metallic footprint, however, nanotechnological developments in military equipment, such as the manufacture of rifles using plastics and other substances, may challenge the appropriateness of such an approach.