By Izzy Woods
Today CBS reports an extraordinary breakthrough in materials engineering: a material so light that “...its volume is 99.99 percent air, and its density is 0.9 milligram per cubic centimeter - not including the air in or between its tubes. That density is less than one-thousandth that of water”. Extraordinary photographs accompany the report, showing the material resting on the head of a dandelion. The scientists who invented the material were working for HRL Laboratories and the Composites Center at the University of Southern California. Who funded the research? The United States' Defense Advanced Research Projects Agency (DARPA).
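The density claim quoted above is easy to verify with a little arithmetic (taking water, as is standard, at 1 gram per cubic centimetre, i.e. 1000 milligrams per cubic centimetre):

```python
# Sanity check on the figures quoted from the CBS report.
microlattice_density_mg_per_cc = 0.9     # reported density of the material
water_density_mg_per_cc = 1000.0         # water: 1 g/cm^3 = 1000 mg/cm^3

ratio = microlattice_density_mg_per_cc / water_density_mg_per_cc
# ratio is roughly 0.0009 - comfortably less than one-thousandth of water
assert ratio < 1e-3
```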
It is no surprise that the US government funds technological research to aid the advancement of the war machine. Every government in the world does so, and scientific advances have undoubtedly led to lives being saved on the battlefield, as well as lost. But with computers predicted to equal the power of the human brain by the 2020s, is sufficient time being given to public debate about the ethics of technology in warfare? At what point will operational decisions be handed over to machines? Is the public in danger of being outflanked by the speed of scientific advance before they have properly considered its moral and ethical consequences? While we are happily browsing late ski deals and deciding on the make of our next car, are there ethical issues that should be more at the forefront of our minds?
Robotics is one area of research the military is very interested in at present, with the US government investing over $4 billion in robotics research, known to them as ‘autonomous systems’. The advantage of having a Terminator-style warrior who will show no fear in the face of the enemy is moving from sci-fi to reality. One only has to look at the BigDog robot created by Boston Dynamics to appreciate the potential. But the government is aware of the ethical dimension of this scenario, and has consulted Colin Allen, scientific philosopher and robotics expert at Indiana University, to advise on whether a robot soldier could be built that is programmed not to violate the terms of the Geneva Convention. Many human combatants do violate those terms, of course, due to the extreme stress of the battlefield, which can lead, for example, to a desire for retribution against a captured enemy and to subsequent violations of international law. Ronald Arkin, a computer scientist at Georgia Tech who is currently working for the US military, has recently concluded that robots are “more likely to perform ethically” than human beings in a warzone, simply because they are not governed by fear or emotion.
Drones are already used to attack military targets in Iraq and Afghanistan, and robots have been used to help with bomb disposal. This is the current state of the art, and drones have already thrown up ethical dilemmas of their own, in the same territory as the philosophical trolley problem. These machines are operated remotely by human beings and are not currently capable of making operational decisions. But the point at which military machines can act autonomously, making operational decisions that result in human death, is not far away. Computers already decide on courses of action for us where there is no moral dimension to the decision, of course. How long before that capacity is shifted over to the battlefield?
The expense of robots such as drones means there is no question of their operating independently at present: the financial impact of losing one to an error is too great. But before long, the mass production of expendable, battle-ready robotic soldiers will be a reality.
Scientists are already working on software to train machine combatants to identify tanks and enemy soldiers, and to differentiate between these ‘legitimate’ targets and non-threatening ones such as ambulances and unarmed civilians. This is a tough call for human beings with all their faculties, and is so context-dependent that one has to wonder where it will all lead. If a child is identified as a non-threatening target, might they simply become the ideal means of delivering bombs right to the doorstep of the enemy? As has been seen in recent conflicts, when the enemy is ruthless enough there is no such thing as a ‘civilian’, and anticipated moral norms about using women and children on the battlefield simply will not apply. Once we start tampering with morality on the battlefield all bets are off, no rules will apply, and the Geneva Convention may as well not be programmed into a machine at all. Human combatants will simply work around the weaknesses built into such a machine in order to defeat it. It’s a chilling thought.
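To see how brittle this kind of target discrimination can be, consider a deliberately simplified toy sketch (entirely hypothetical, not any real military system, with made-up field names like `type` and `armed`) of a rule-based classifier of the sort the paragraph above describes:

```python
# Toy illustration only: a hypothetical rule-based target classifier,
# showing how easily a 'protected' category can be exploited.

def classify(contact):
    """Label a sensor contact as 'legitimate' or 'protected'.

    `contact` is a dict with invented fields: 'type' (e.g. 'tank',
    'soldier', 'ambulance', 'civilian', 'child') and optional 'armed'.
    """
    protected_types = {"ambulance", "civilian", "child"}
    if contact["type"] in protected_types and not contact.get("armed", False):
        return "protected"
    return "legitimate"

# The rule looks reasonable in isolation:
print(classify({"type": "tank"}))        # legitimate
print(classify({"type": "ambulance"}))   # protected
# But an adversary who delivers a bomb via a 'protected' contact
# defeats the rule precisely because the machine must honour it:
print(classify({"type": "child"}))       # protected - and exploitable
```

The point of the sketch is not the rules themselves but their fixity: any hard-coded restraint becomes, to a ruthless opponent, a documented weakness to be worked around.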
Highly complex battlefield situations have to be judged by those on the ground, trained partly through experience and governed by human morality. Decisions are split-second, and quite often wrong. But once we hand decisions over to machines, and abdicate our responsibility for them, we may win the battle, but we will have lost the peace of mind that currently prevails – that man, not machine, is in charge of our destiny. Perhaps it is time for this debate to be made more public, before science fiction becomes science fact.
Wendell Wallach and Colin Allen, Moral Machines: Teaching Robots Right from Wrong, Oxford University Press, USA. Website exploring the ethics of robots: http://moralmachines.blogspot.
P. W. Singer, Wired for War: The Robotics Revolution and Conflict in the 21st Century
Stephen Graham, America’s Robot Army, http://www.newstatesman.com/
Ronald C. Arkin, Governing Lethal Behavior in Autonomous Robots, Chapman and Hall/CRC, 1st edition
Super-lightweight material: http://i.i.com.com/cnwk.1d/i/