Do we need robotic technology rules of engagement?

In a recent post I discussed a talk by Dr Peter Singer about robotics and 21st century warfare.  This use of technology raises some very big moral dilemmas, especially in the area of law, rules of engagement, and the personal effects of this kind of warfare on both combatants and civilians.

For instance, there is no current agreement on which body of law would govern the use of robotic devices in war.  Who is to blame for any errors?  Is it the operator, who may be sitting a continent away?  What if there is a software glitch?  What happens when the device cannot distinguish between a child or an elderly person and an enemy combatant?  What is the machine equivalent of manslaughter?

These are not trivial questions and, rather than developing complex legislation akin to the Income Tax Act, do we need a Star Trek-like ‘prime directive’?  This is not science fiction, and it is not the future of war: it is already here and operative today.

As Singer said, “the fog of war is not lifting, we are still seeing mistakes”.  We need to figure out accountability for “un-manslaughter”; he cited the example of a drone malfunction that killed nine soldiers in a training exercise. Armed autonomous systems are becoming commonplace, and some big questions remain unanswered. What about war crimes? What about errors? How can the machines distinguish between innocents and combatants?  None of these questions mattered when we were just using robots to build cars, but now they do.

As Matthew rightly points out in a comment on my previous post, the creation of increasing distance between the killer and the victim makes killing much easier.  Killing with a knife or bayonet is so much more personal than pressing a button to dispatch a device to destroy a school (which may or may not have schoolchildren in it at the time) that is being used to house munitions for insurgents.

This continued depersonalisation of killing makes it easier and easier to undertake offensive action.  If your own soldiers can sit safely in an office at home and simply use the equivalent of a computer game to attack, then the social and political cost is very low.  But what is the personal cost to the soldiers involved?  Is killing any less the delivery of death if it is done at a distance?  Is going home to eat dinner with the family after dealing death all day at the office even more stressful than being on the ground in a combat zone?

We are entering uncharted territory with this technology in terms of its impact on society, on our warriors, and on our enemies and on civilians. As nation states continue to act against non-state actors (such as various insurgent groups or “terrorists”), the dilemma of what happens to innocent civilians will become even more problematic as the element of human judgement and compassion on the ground is removed from the equation.

Further, we do not know if the use of this technology will simply shift enemy action away from their own territory and onto ours. This is entirely possible because, if they can no longer kill our soldiers, how else will they seek to make their point and cause us damage?

Some other interesting articles about robots:
Robots Take To The Stairs – This Is Just The Beginning
Robot sub aims for deepest ocean
How to Make (Robot) Friends and Influence People