War without humans
Lambèr Royakkers of the Eindhoven University of Technology analyses the dangers of having machines make life-or-death decisions.
The last war in Iraq was already a war of robots: by the end of 2008, 12,000 ground robots and 7,000 drones were involved. Today, new robotic and AI applications are being developed not only to perform dull, dangerous, and dirty jobs, but also to attack and kill targets directly. At least 76 countries are extensively developing military robotics technology, including Russia, China, Pakistan and India. According to Lambèr Royakkers, an expert in ethics and technology at the Eindhoven University of Technology (TU/e), it will be necessary to establish a strict framework governing the use of these systems.
Technologist: When will we see autonomous systems at war?
Lambèr Royakkers: We have noticed recently that “cubicle warriors” – ground-based pilots – are increasingly being assigned monitoring tasks rather than active control of their aircraft. The next step would be for the cubicle warrior to become unnecessary and for the military robot to function autonomously. The US Air Force assumes that by around 2050 it will be possible to deploy fully autonomous unmanned aerial combat vehicles. It makes financial sense: autonomous vehicles are less expensive to produce than remote-controlled robots, and since they don’t require human flight support, they also don’t incur added personnel costs.
T. What are the risks if we let machines make decisions about human life?
LR. We have the problem that machines can’t distinguish a civilian from a soldier. Their sensors may indicate that something is a man, but they cannot determine whether it’s a civilian or a combatant. This would require them to have the ability to understand the intentions of a human being and to judge what a person is likely to do in a given situation. These qualities are all the more important given the fact that, increasingly, “asymmetric” and non-traditional warfare is carried out by a conventional army using modern technology against insurgents. The insurgents – irregular and mainly undisciplined fighters led by warlords – are often not recognisable as combatants.
T. What does a machine do better than a human in a war situation?
LR. Indeed, there are also advantages. For example, the American robo-ethicist Ronald Arkin suggests that autonomous armed military robots could prevent acts of revenge and torture. At war, soldiers are exposed to enormous stress and all of its consequences. Various scientific publications have shown that during the US-led military operation in Iraq, feelings of anger and revenge after the loss of a fellow soldier resulted in double the number of civilian abuses, and that emotions can cloud soldiers’ judgement during war.
T. Should we limit the development of autonomous weapons?
LR. Yes, I think so. Automated killing is absurd – humans should always take life-and-death decisions, not machines. Fortunately, a serious debate has already started. For example, every year the UN organises a convention on this topic. Last year, leading technology figures – such as Elon Musk, Steve Wozniak and Stephen Hawking – urged a ban on autonomous weapons in an open letter signed by thousands of artificial intelligence and robotics experts.