Permissible Arms

RROE (Robotic Rules of Engagement)

Posted in us military by Karaka on 9 May 2010

I hadn’t seen this around the usual sources (perhaps I just missed it), but this month’s Discover Magazine did a special on robotics, with an article devoted to robotic warfare. It wasn’t available online, so I scanned it for those who might be interested: Mark Anderson – The Terminators (PDF). (Also, I really wanted to call this post “These Aren’t the Drones You’re Looking For.” Admire my restraint.)

The first quarter of the article is devoted to anecdotal experience combined with hard facts about the US military’s drone program, though the author acknowledges that it would be going too far to call the drones true robots (a term that, as I understand it, implies the ability to interact with the physical world independently of human control). But the majority of the article deals with the conflict between autonomous robotic weaponry and ethical restrictions on robotic behaviour.

The current proof-of-concept version of [Ronald] Arkin’s ethical governor assumes that discrimination has been programmed into the machine’s software. The robot is permitted to fire only within human-designated kill zones, where it must minimize damage to civilian structures and other regions exempted by human operators. The program then weighs numerically what soldiers in battle weigh qualitatively. One part of the code, for instance, makes basic calculations about how much force should be used and under what circumstances. What Arkin calls the “proportionality algorithm” mathematically compares the military importance of a target (ranked on a scale of 0-5–a number that a human must provide) with the capability of every weapon the robot has and with every nearby place from which the robot can fire that weapon. In some situations, the robot might have to reposition before it fires to ensure minimal collateral damage. In others the target might be ensconced within a school or crowd of civilians, creating a situation in which the algorithm prohibits firing under any circumstances at all.
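
The quoted description is abstract, so here is roughly how I picture the core of that check working, as a back-of-the-envelope sketch in Python. To be clear, this is my own illustration, not Arkin’s actual governor: FiringOption, proportionality_check, the collateral estimates, and the 0.1-per-point tolerance are all things I invented for the sake of the example.

    from dataclasses import dataclass

    @dataclass
    class FiringOption:
        weapon: str                 # which weapon the robot would use
        position: str               # where it would fire from
        collateral_estimate: float  # predicted collateral damage, 0.0 to 1.0

    def proportionality_check(target_value, options, tolerance_per_point=0.1):
        """Return the least-damaging permissible option, or None (hold fire).

        target_value is the human-supplied military importance, 0 to 5;
        the collateral tolerance scales with it, in the spirit of weighing
        numerically what soldiers weigh qualitatively.
        """
        if not 0 <= target_value <= 5:
            raise ValueError("importance must be human-ranked on a 0-5 scale")
        tolerance = tolerance_per_point * target_value
        permissible = [o for o in options if o.collateral_estimate <= tolerance]
        if not permissible:
            return None  # e.g. target inside a school or a crowd: never fire
        # Repositioning falls out naturally: the best option may differ in
        # firing position rather than in weapon.
        return min(permissible, key=lambda o: o.collateral_estimate)

Even this toy version makes one thing plain: every number it compares has to come from somewhere.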

Taking into account that this is only a proof of concept, and not actually prototyped, allow me to make the following observations:

  1. Such judgments would require staggering amounts of intel on any zone such a machine would operate within, intel that would itself require human analysis. Even if an additional robot, or one of the existing drones, did runs over a given area to provide footage, that footage would still have to be reviewed and interpreted. Particularly in conflict areas with poor communications or humint, such devices would only be as useful, and as effective, as the intelligence their human operators have. (Which raises the inevitable question: what happens if those intelligence operations are insufficient or incomplete? The risk of such a device seems inversely proportional to our knowledge of the AO.)
  2. There are purported advantages to removing personnel from the site of conflict. Fewer deaths, of course, and fewer injuries. Less loss of expensive equipment, and perhaps less loss of ground or captured assets. But it seems other things are lost if we withdraw personnel, not least some measure of credibility. (This is germane to Pakistan even today.) If we can send in a robot to deal with armed conflict without great human cost, is there enough disincentive to keep from employing such devices as a deterrent, to ensure a balance of power, rather than as a weapon in an acknowledged conflict? What I mean to say is: can we avoid becoming the police force of the third world if the use of such devices might keep conflict from ever erupting? Does the efficiency of using an unmanned, automated robot armament circumvent the reporting that comes with human eyes on a conflict?
  3. This is a philosophical can of worms I am only going to touch on, much to my chagrin, rather than open fully. But: how do you put a quantitative, numerical value on a human life? I’m not even talking about a target; we already put monetary value on such lives, or on information that leads to their capture. But say Target X appears 200 meters from a crowd of people. Is the weighted, numerical value of Target X worth potential injury to a person in that crowd? I am not convinced that such a calculation can be left to an automated device operating discretely in contested space. (A rough worked example of what that calculation entails follows this list.)
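
To make that last point concrete, here is the Target X scenario run through the toy proportionality_check sketched above. Both collateral estimates are, again, numbers I made up, and that is rather the point: someone, or something, has to supply them.

    # Target X, human-ranked importance 3 of 5, appears 200 m from a crowd.
    options = [
        FiringOption("missile", "current position", collateral_estimate=0.6),
        FiringOption("rifle", "rooftop 150 m east", collateral_estimate=0.05),
    ]

    choice = proportionality_check(target_value=3, options=options)
    # With the invented tolerance of 0.1 * 3 = 0.3, the missile is ruled
    # out and the repositioned rifle shot is selected. Whether anyone in
    # that crowd would agree with the 0.05 is the whole problem.
    print(choice)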

Despite these thoughts, I’m not actually single-mindedly against robot armaments. Robotics is a fascinating field, especially for a lifelong hard sci-fi nerd like me. But I have an instinctive reserve when it comes to true automation in weaponry. (It’s possible I watched War Games too many times as a kid.) Clausewitz’s adage continually comes to mind: war is fought by human beings. But if humans aren’t fighting the wars, is it altogether something else?

Noel Sharkey, also quoted extensively in the article, says, “Nobody’s making an autonomous bomb-finding robot… At the moment, the machinery for doing that is too large to fit on a robot. But that’s where the money should go.” Somehow that seems more reasonable than a green light for Arkin’s “artificial ethics,” an idea that gives me great pause, and one that results in death rather than life.

2 Responses

  1. […] media by Karaka on 10 May 2010 For the record, seeing Iron Man 2 only cements my position on robot armaments. Tagged with: iron man, […]

  2. The Tao of Toasters « Karaka Pend said, on 19 May 2010 at 21:16

    […] funny to find that the professionals have been working on something along the line of one’s own thoughts, though not particularly surprising–it’s what they’re paid to do, after all. […]

