Drones are everywhere. Not necessarily in the skies above our heads but in the news, in discussions around the office, and front and center in the national consciousness.
Until recently, these conversations had largely been about the use of battlefield drones. But that conversation’s about to change because of the Department of Defense’s development of autonomous weapon systems — drones no longer tethered to a human operator but designed to make their own life-and-death decisions.
Designing machines for autonomous operations is useful and expedient. The fact that a Roomba can clean the floor without your involvement is definitely appealing. But machine autonomy also has a dark side, vividly illustrated in science fiction.
Although exaggerated for dramatic effect, the basic questions raised by these techno-myths already apply to contemporary technology: How much autonomy should we design into these systems? How reliable are machine-generated decisions? Should we count on them? And if something goes wrong, who or what is culpable when decision-making and real-world action are no longer under human direction and control?
Responding to these questions will require the efforts of not only engineers and roboticists but also philosophers, sociologists, legal scholars, policy experts, and informed citizens.
What’s most important now is to begin having these conversations. Autonomous drones are no longer science fiction; they are here now. And we have a unique opportunity to decide whether and how to deploy them.
I’m David Gunkel, and that’s my perspective.