Human Intervention Imperative For Autonomous Weapons

June 12, 2018

We see many autonomous technologies being developed across sectors, self-driving cars being a prominent example. The development of autonomous weapons and machinery is underway as well. The US Army is interested in building autonomous weapons, having stumbled into this military robotic revolution through its campaigns in Iraq and Afghanistan.

Autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.


Let’s Talk Weapons

Drones are no longer new technology to anyone who keeps up with the tech world. They already have applications in photography, logistics, rescue operations, and more. But these unmanned aerial vehicles can also be equipped with autonomous technology to create defensive or offensive weapons. Drones are especially handy for overhead surveillance. Imagine a drone working as a bomb disposal robot, and how much it could reduce the threat posed by improvised explosive devices hidden on the ground.

We have self-driving cars, so why not self-driving tanks? Russia has already built an unmanned ground combat vehicle, the Uran-9. It hasn’t seen combat yet, but reports suggest it will enter service by the end of 2018. It features a machine gun, missiles, a 30mm cannon, and remote-controlled sensors, and it is designed to keep a human in the loop during combat.

What about robots fighting in combat in place of humans? Imagine an army of Terminators defending our borders. Would you feel safer with this change? It would, of course, mean saving the lives of thousands of soldiers who would no longer have to stand directly in the line of fire. But how much destruction could an army of robots cause? There is also the concern of malfunction. After all, the code will be written by humans, and any error in it could cause anything from minor to colossal losses.

All in all, the rise of autonomous weapons seems inevitable. The only question now is: can it be controlled? Or is this the end of the world as we know it?


Who Will Cross The Line Of Morality?

The US Army says that, for the time being, it wants humans in the loop. It is not creating a Terminator; its focus is on building a combat robot that moves from Point A to Point B with human intervention always at hand.

And yet tricky moral and legal questions arise when we discuss autonomous weapons. How will these weapons be trained to fight? When will they deem it appropriate to attack? Whom will they destroy, and what kind of collateral damage should be anticipated? Most importantly, with so much power at hand, and despite the laws governing such combat, who will cross the line of morality? It takes just one; others will slowly follow. In the name of defensive strategy, President Truman of the United States authorized the atomic bombing of Hiroshima, and we are all aware of the catastrophic devastation it caused. Truman’s decision was spurred by the surprise attack on Pearl Harbor by the Japanese Navy’s air forces, and by the US military’s uncertainty that it could win the war without the bombing. That’s all it took, and look what it cost. So what will it take this time to cross the line of morality?

Even if the US Defense Department is planning to keep a human in the loop, other countries might not. A code of conduct in accordance with the laws of war and existing treaties has to be strictly enforced for every autonomous weapon built. We don’t need another weapon of mass destruction, another genocide. The United Nations has held meetings to discuss autonomous weapons, and in November 2017, 22 countries called for a ban on ‘Killer Robots’. Brazil, Uganda, Iraq, Pakistan, and Argentina are among the countries calling for a ban on fully autonomous weapons systems; Germany joined this list in early 2018.

These weapons can be built with varying degrees of human control, so a common vocabulary needs to be agreed upon. As Paul Scharre, author of Army of None: Autonomous Weapons and the Future of War, has stated, “Autonomy and intelligence are not the same thing. As systems become more intelligent, we can choose to grant them more autonomy, but we don’t have to.” It all comes down to a choice.