To date, the debate has mostly focused on three issues: How far off are we from developing advanced autonomous weapons? Could such technologies be made to comport with international humanitarian law? Could a ban be effective if some nations do not comply?
On the first issue, the open letter reveals something striking: many technologists believe the robot revolution is “feasible within years, not decades, and the stakes are high.”
Of course, this is largely speculative, and the actual timeline is surely longer once one layers the requirements of the second issue — that killer robots must comport with international humanitarian law — on top of the technology. That is, machine systems operating without human intervention must be able to: successfully discriminate between combatants and non-combatants in the moment of conflict; morally assess every possible conflict in order to determine whether a particular use of force is proportional; and comprehend military operations well enough to decide whether the use of force on a particular occasion is a military necessity.
To date, there is no obvious solution to these non-trivial technological challenges.
However, in my view, it is the stance taken on the third issue — whether it would be efficacious to ban killer robots in any event — that makes this open letter profound. This is what made me want to sign the letter.
Although engaged citizens sign petitions every day, it is not often that captains of industry, scientists and technologists call for prohibitions on innovation of any sort — let alone an outright ban. The ban is an important signifier. Even if the letter is partly self-serving — it seeks to avoid “creating a major public backlash against AI that curtails its future societal benefits” — its recognition that starting a military AI arms race is a bad idea quietly reframes the policy question of whether to ban killer robots as one of morality rather than efficacy. This is crucial, as it provokes a fundamental reconceptualization of the many strategic arguments that have been made for and against autonomous weapons.
When one considers the matter from the standpoint of morality rather than efficacy, it is no longer good enough to say, as careful thinkers like Evan Ackerman have said, that “no letter, UN declaration, or even a formal ban ratified by multiple nations is going to prevent people from being able to build autonomous, weaponized robots.”
We know that. But that is not the point.
Delegating life-or-death decisions to machines crosses a fundamental moral line — no matter which side builds or uses them. Playing Russian roulette with the lives of others can never be justified merely on the basis of efficacy. This is not only a fundamental issue of human rights. The decision whether to ban or engage killer robots goes to the core of our humanity.
The Supreme Court of Canada has had occasion to consider the role of efficacy in determining whether to uphold a ban in other contexts. I concur with Justice Charles Gonthier, who astutely opined:
“(T)he actual effect of bans … is increasingly negligible given technological advances which make the bans difficult to enforce. With all due respect, it is wrong to simply throw up our hands in the face of such difficulties. These difficulties simply demonstrate that we live in a rapidly changing global community where regulation in the public interest has not always been able to keep pace with change. Current national and international regulation may be inadequate, but fundamental principles have not changed nor have the value and appropriateness of taking preventive measures in highly exceptional cases.”
Killer robots are a highly exceptional case.
Rather than asking whether we want to be part of the steamroller or part of the road, the open letter challenges our research communities to pave alternative pathways. As the letter states: “AI has great potential to benefit humanity in many ways, and … the goal of the field should be to do so.”
In my view, perhaps the chief virtue of the open letter is its implicit recognition that scientific wisdom posits limits. This is something Einstein learned the hard way, prompting his subsequent humanitarian efforts with the Emergency Committee of Atomic Scientists. Another important scientist, Carl Sagan, articulated this insight with stunning, poetic clarity:
“It might be a familiar progression, transpiring on many worlds – a planet, newly formed, placidly revolves around its star; life slowly forms; a kaleidoscopic procession of creatures evolves; intelligence emerges which, at least up to a point, confers enormous survival value; and then technology is invented. It dawns on them that there are such things as laws of Nature, that these laws can be revealed by experiment, and that knowledge of these laws can be made both to save and to take lives, on unprecedented scales. Science, they recognize, grants immense powers. In a flash, they create world-altering contrivances. Some planetary civilizations see their way through, place limits on what may and what must not be done, and safely pass through the time of perils. Others, not so lucky or so prudent, perish.”
Recognizing the ethical wisdom of setting limits and living up to the demands of morality is difficult enough. Figuring out the practical means necessary to entrench those limits will be even tougher. But it is our obligation to try.