May 17, 2013 8:20 am
Drones and the Human Agency of War

by A. Jay Adler

A drone preparing to take off.

Joshua Foust has written at Foreign Policy a misleadingly titled essay, “A Liberal Case for Drones.” I think there is such a case, but this is not it, and a case for drones is not even truly the subject of the piece. The actual subject is raised very early by Foust’s question, “Could autonomous drones actually better safeguard human rights?” Not drones, but autonomous drones and their relation to human rights protections in war is the actual subject of Foust’s considerations. Why the title misleads, you will have to ask Foust and Foreign Policy. That is not my interest here. Neither is the debate in the comments to Foust’s essay about whether there truly are, or are likely any time soon to be, autonomous drones. My interest is in Foust’s arguments and how they mistake the human problem of war.

Foust tells us that Human Rights Watch

argues that autonomous weapons take humanity out of conflict, creating a future of immoral killing and increased hardship to civilians. HRW calls for a categorical ban on all development of lethal autonomy in robotics. HRW is also spearheading a new global campaign to forbid the development of lethal autonomy.

To narrow the focus still more, then, the issue is lethally autonomous drones. (Or weapons of any kind; the focus on drones here is purely topical.)

“Offensive systems, which actively seek out targets to kill,” Foust quotes Armin Krishnan, a political scientist at the University of Texas at El Paso, “are a different moral category.”

Foust then makes the relative accuracy and reliability of human versus automated agency in offensive military strikes and killing the major practical and moral focus of his essay. He acknowledges moral concerns – not with drones per se, but with lethal autonomy – but he mistakes them.

Noel Sharkey, a high-profile critic of drones and a professor of artificial intelligence and robotics at the University of Sheffield, argued forcefully that machines cannot “distinguish between civilians and combatants,” apply the Geneva Conventions, or determine proportionate use of force.

It is a curious complaint: A human being did not distinguish between civilians and combatants, apply the Geneva Convention, or determine an appropriate use of force during the infamous 2007 “Collateral Murder” incident in Iraq, when American helicopter pilots mistook a Reuters camera crew for insurgents and fired on them and a civilian van that came to offer medical assistance.

Humans get tired, they miss important information, or they just have a bad day. Without machines making any decisions to fire weapons, humans are shooting missiles into crowds of people they cannot identify in so-called signature strikes.

Thus, for Foust, the morality of lethal autonomy in weapons systems is tied essentially to accuracy and reliability.

“If a drone system is sophisticated enough, it could be less emotional, more selective, and able to provide force in a way that achieves a tactical objective with the least harm,” Liles says. “A lethal autonomous robot can aim better, target better, select better, and in general be a better asset with the linked ISR [intelligence, surveillance, and reconnaissance] packages it can run.”

In other words, a lethal autonomous drone could actually result in fewer casualties and less harm to civilians.

Implied by all Foust argues is that human moral advancement in the conduct of war – a problematic, though nonetheless genuine, notion acknowledged by just war theory, among others – is exemplified by diminished numbers of casualties, especially civilian, and by what would amount to more effective winning. This is a seductively appealing argument on the face of it. If we must sometimes fight wars (well, really, we have to admit, it is far more often than sometimes), let us at least do it by killing as few people as possible, certainly as few women and children, in the classic formulation, and as few innocent civilians.

These are certainly goals to pursue, and the militaries of liberal democracies do most of the time pursue them. But I do not think this goal is the essence of human moral advancement in war. First, effectiveness in winning war has never been a problem. Since wars began, whenever exactly that was – two clans fighting over a cave and a fire? – most of the time one side has managed some kind of victory. Warring groups have always been effective at winning.

On the score of diminished civilian casualties, whatever increased human concern with what are called the laws of war, through the mid-twentieth century it can hardly be argued that humanity had achieved any form of advancement. More effectively lethal weapons produced, in fact, more killing, and more civilian death, on a scale previously unimaginable. Since the second half of the twentieth century, a pronounced characteristic of war, in the lethality of weaponry, has been that of profound technological disparity between warring parties. This has been so in all of the conflicts of the United States, of Israel over the past more than thirty years, and of the Soviet Union and of Russia in Chechnya, for example. This has produced markedly lower comparative casualties on one side (not always a clear winner, as with the U.S. in Vietnam or Israel in Lebanon in 2006), though sometimes still comparatively massive casualties, even mostly civilian, as in Vietnam and the Iraq War, on the other. This disparity may be a happy development for the side with low numbers – not necessarily a winner, and not by any inherent necessity deserving of the benefit – but it cannot easily be argued that such a development is an advancement in the protection of human rights in war.

Foust touches on the heart of the matter only at the very end.

The issue of blame is the trickiest one in the autonomy debate. Rather than throwing one’s hands in the air and demanding a ban, as rights groups have done, why not simply point blame at those who employ them? If an autonomous Reaper fires at a group of civilians, then the blame should start with the policymaker who ordered it deployed and end with the programmer who encoded the rules of engagement.

This is far too facile in its moral acknowledgement and in its practical recognitions. In the latter regard, the very first product of technological autonomy will be a flight from responsibility and blame. A coder programming an autonomous offensive weapon according to approved selection criteria, under the guidance of established military procedure and national law, would be, and should be, no easy target for the assignment of moral responsibility. Such a chain of abstracted and decontextualized decisions is the very scenario of plausible deniability of responsible agency all around.

Responsible agency, the assumption of moral agency – not mere assignment of blame – is the heart of the matter. While earlier approaching the point, Foust misses it.

[T]he concern seems rooted in a moral objection to the use of machines per se: that when a machine uses force, it is somehow more horrible, less legitimate, and less ethical than when a human uses force. It isn’t a complaint fully grounded in how machines, computers, and robots actually function.

This is, indeed, essential to the more general debate over the use of drones; in the current consideration, though, the matter is not machines using force (really, being used for it), but machines using force autonomously. Autonomous weaponry removes the human moral agency of killing in war and could remove it, ultimately, from war altogether. Yet if anything can redeem the essential human crime of war, enact justice in the waging of it, it is precisely the complementary human moral agency of it.

Yes, if we must wage war, kill as few people as possible; yes, if we must, kill as few innocents as possible (on both sides). But it is, as Human Rights Watch and others assert, human beings who must take on the burden of that responsibility, even if they might exercise it less perfectly than machines. War is the greatest crime against life we commit. It destroys the humanity of the dead and diminishes that of the living who wage and survive it. To reduce the numbers killed by passing off the complete task of killing to machines will not redeem a greater store of our humanity in a just cause, but instead sacrifice the remainder of the humanity we sought to save. To wage war and remain fully, tragically human, we must keep our own fingers poised, we must sight, however remotely, the people we have accepted as our enemies, and we must, with full recognition of what we do, accepting ourselves the burden of what we do, choose to pull the trigger ourselves. Automating war to greater perfection will not protect our human rights; it will diminish our human being. The crime of war is human. The morality in it can only be human too.
