Silent Violence: The Dangers of Autonomous Weapons

Matthew Fu ’27 in Opinions | September 20, 2024

H.G. Wells claimed the “introduction of the longbow marked the beginning of the end for the need to look into the eyes of those we kill.” This trend away from personal warfare—away from short-range weapons fired by medieval fighters with chivalric codes—has continued. Jet bombers can wipe out entire towns with the push of a button. The atomic bombs dropped over Hiroshima and Nagasaki caused roughly 200,000 deaths by the end of 1945. As Wells would point out, those who ordered these detonations never had to “look into the eyes” of a single person they killed; to them, the 200,000 were a mere statistic. Autonomous weapons systems take this depersonalization of war one step further. As Artificial Intelligence (A.I.) develops, autonomous weapons systems are becoming increasingly feasible for military use, and that is a dangerous prospect.

An autonomous weapons system (AWS) selects and engages targets without human instruction. It might destroy radar installations, serve as a sentry, or eliminate targeted soldiers. This can greatly improve the effectiveness of military operations: a computer program, unlike a human being, does not experience stress, fear, doubt, or any other emotion that may slow it down. Additionally, an AWS can draw on complex pattern recognition systems and operate with a far greater range of motion than any human. It is also much cheaper to maintain than a conventional foot soldier, who needs weapons, intensive training, food, and pay. Governments are pouring billions of dollars into testing and refining such systems because they are far cheaper and more dependable than human forces, and the ethical burden of losing one is far lighter than that of losing a human life.

Why, then, do so many experts call for the abolition of such systems? First, international humanitarian law asks soldiers to follow the key principles of distinction, proportionality, and precaution. We cannot be sure that a computer program can accurately distinguish between a civilian and an enemy soldier. No matter how much pattern recognition goes into such a program, it may fail to identify cultural nuances and exercise appropriate judgment. An AWS may also fail to minimize collateral damage, simply because it lacks the situational awareness to terminate a mission when civilian casualties outweigh its importance or to search for alternative approaches. Unfortunately, this does not deviate much from typical military operations, which are already rife with human error that violates these principles. Consider accountability instead. Punishing a human who commits war crimes is straightforward. If an A.I. program kills a group of civilians, who takes responsibility? The programmer? The manufacturer? Or no one at all? Desperate governments may abuse this loophole.

More importantly, if autonomous weapons come to dominate militaries, our concept of war will be completely rewritten. War is now viewed as a last resort between countries that are unwilling to cooperate on fundamental issues. It has become a rarity, especially between economically developed countries. If they can help it, world leaders avoid throwing billions of dollars and tens of thousands of troops into battle over minor conflicts; we enjoy peace and relative economic stability because of this restraint. If wars are instead fought by cheap autonomous weapons systems, the threshold for going to war will fall, since the economic cost is small and the ethical concerns, at least regarding a country’s own citizens, are comparatively few. Military leaders will grow indifferent to the use of lethal force if they no longer have to authorize it personally, and wars will spring up everywhere. Civilian populations will bear the brunt of the autonomous attacks, and the mindless deployment of military power may become commonplace.

If a war must occur, one in which autonomous weapons are destroyed instead of human beings is preferable. Allowing these systems full autonomy, however, is a horrible idea. Consider the alternative: an almost-autonomous system overseen by a human operator who authorizes any use of lethal force. This keeps such systems in accordance with international law, fixes the accountability issue, and ensures that we do not become so detached from lethal force that mindless wars spring up across the globe. Though a world without the ethical ambiguities of AWS would be preferable, such technology seems inevitable, and this is probably the best way to approach it.