Photo: Portrait of author Paul Scharre

Policy wonk and Colby-Award-winning author Paul Scharre on the future of autonomous weapons


Q: Based on your research for your book Army of None, how do you see the present moment?

We’re at a time when artificial intelligence technology is coming out of research labs and into the real world, and it poses serious questions about how we use this technology in a whole array of fields: in finance, in medicine, and also in war. This is an issue that affects all of us. Whatever weapons systems nations build, we are all going to live in the world that these weapons inhabit. We’ll have a stake in what that future looks like.

Q: So what does the future look like from your point of view?

Well, it’s clear that we’re on a path toward greater autonomy in weapons systems. Just as with self-driving cars, each generation of military robotic systems has more autonomy than the last. I think what’s still unclear is how far militaries are going to take that, and whether they’re going to cross the line to what you might call fully autonomous weapons: weapons that are actually making their own targeting decisions on the battlefield. These would still be weapons that are built, programmed, and launched by humans. We’re not talking about robots building robots, or Terminator, or anything like that. But fully autonomous weapons that make their own decisions about which targets to attack on the battlefield would nevertheless fundamentally change humans’ relationship with violence in the world.

It raises some really challenging legal, ethical, and strategic issues that are certainly worth considering. We might want to use artificial intelligence technology to make weapons more precise and more discriminate, to try to reduce civilian casualties. At the same time, there are lots of things that humans can do in war that machines cannot: bringing judgment to bear to understand a situation, balancing ethical dilemmas, understanding the broader context of an operation. There are plenty of historical examples of humans making these kinds of decisions in ways we know machines could not handle very well. So even as we find ways to use this technology to improve military operations, to make warfare more precise and humane, we don’t want to lose our humanity in the process. We need to keep humans in the role of legal responsibility and ethical decision-making in war.

Q: What’s the potential downside? How badly could things go?

The essence of autonomy is delegating a task to a machine, whether that’s a self-driving car, an autonomous weapon, or a thermostat. The risk is that the machine makes the wrong decision. For some decisions, like a thermostat’s, the consequences may be pretty low. You come home from vacation and the house isn’t the temperature you want; it’s an inconvenience, but you’re all right. For things like weapons that are making lethal decisions, of course, the consequences could be serious. An autonomous weapon attacking the wrong target could result in civilian casualties or fratricide, or it could attack the enemy at the wrong time and place, which could lead to unintended escalation of a conflict. It could be a valid enemy military force, but you’re not yet at war. That’s a real problem.

Humans make mistakes in war. Humans are far from perfect, and there are accidents in war today that result in fratricide, unintended escalation, and civilian casualties. And, of course, humans also do things that are deliberately wrong, like commit war crimes, unfortunately. One of the things that’s different about machines is that the way they fail could be quite different. Human failures tend to be relatively idiosyncratic: a different person in the same situation might make a different decision. Autonomous systems open up the potential for mass failures, failures that scale and that could lead to catastrophic accidents. It’s a very different way to think about reliability for weapons systems and the potential scale of accidents, and I think it’s one that militaries themselves have yet to truly absorb. When they think about the reliability of autonomous weapons, it’s often in the context of one weapon system making one decision.

But if there is a failure, if the environment turns out not to be the environment the system was trained for, or if an enemy hacks or manipulates the system somehow, the potential is for the failure of an entire fleet of weapons, at scales that could lead to much more catastrophic destruction. That, I think, is a real and significant challenge when you think about how to use this kind of technology.

Q: Are there things we should be thinking about or doing?

Yeah. I think there are two sets of challenges when it comes to autonomous weapons. One is a concern that they’re just not going to make the right decision; it’s fundamentally about reliability. Reliability is really hard in this environment, because war is not very common, which is a good thing. It also means that we don’t have a lot of good data on what wartime environments look like. Wars are always different, changing, and unique. And, of course, war is an adversarial environment, so the enemy is trying to change the rules of the game. That’s a real problem for machines. Machine intelligence today can outperform humans in very narrow areas, when the task is very specific and the environment is controlled. But in uncontrolled, adversarial environments, machines often do very poorly. So it’s a real hurdle to get to the point where you might get reliable performance in a way that we’re comfortable with. In principle, that is a technical problem that will be solvable over time.

The second challenge, and the real danger here, is that there is competitive pressure between countries to deploy AI-enabled weapons quickly on the battlefield to get an advantage over others. That pressure leads countries to cut corners on safety. It’s a genuine concern: even if countries themselves might say, “Well, you know, this isn’t quite ready for prime time. Let’s take our time. Let’s test this,” the dynamic of trying to beat a competitor to market, to get a system out there before the other guy does, can create some real perverse incentives for people to cut corners on safety and maybe deploy things that aren’t totally ready.

This is particularly hard in the military environment, because you don’t necessarily get to test these systems very well. For, say, a self-driving car, how reliable is the system? We can take it out on the roads, drive around, and see whether it gets into accidents or not. In the military, that opportunity doesn’t exist. There are training environments, ranges, and computer simulations, but these are only approximations of war. So there’s actually much more uncertainty about how reliable a military system is.

Q: What might the future hold, then?

Some scholars have theorized about a future several decades from now where the pace of military operations enabled by automation might be faster than humans can even respond to. We’ve seen this in other areas like stock trading, where whole domains operate at superhuman speeds, in milliseconds. That’s an interesting lens to look through at where we may be headed in the military environment. With electronic stock trading, you get accidents like flash crashes. It’s a little bit frightening to think about a flash war. We wouldn’t want that. It’s in no one’s interest.

So how do you guard against some of these risks, against unexpected interactions between algorithms operating at superhuman speeds that may lead to harmful consequences? Nation-states hardly want situations where their robotic systems, or their cyber weapons, which really do have the potential to operate at superhuman speeds, begin doing things that nobody planned for. That’s a little bit frightening.

Q: What really freaks you out?

I talk in Army of None about a hypothetical future scenario involving intelligent, adaptive malware: the combination of AI, autonomy, and cyberspace. The reality is that we have already seen very rapid evolution in malware. Viruses and worms by definition already have a lot of built-in autonomy; they operate at scale, replicating across the Internet and executing their attacks. We’ve already seen cyber weapons, like Stuxnet, with the ability to carry out physical attacks. This technology can operate at superhuman speed and superhuman scale, which poses a particular challenge.

So far, the adaptation and evolution of cyber systems has moved at human speed. But technology is changing that. Today, it is possible to build computer programs that autonomously discover new vulnerabilities. This is different from a human-engineered virus or worm that exploits a known bug or a zero-day vulnerability. Now it’s possible to build systems that go out, find cyber vulnerabilities, and then spread. We’ve yet to see this really employed, but the barriers to entry for many of these new cyber tools are much lower than you would like. I find that quite troubling.

The nightmare scenarios I outlined in Army of None were a little bit hypothetical at the time; I was connecting a couple of dots. Since the book was published, those dots have become even more connected in the real world. That’s the thing that keeps me up at night. While many of us of a certain age remember a world before the Internet, when everything worked just fine without it, that’s not the case today. If you knock out the Internet today, food doesn’t arrive in grocery stores. The electrical grid doesn’t function. Gas station pumps don’t work, so people can’t get the gas they need to power their generators. People can’t go to the store to buy things, because they use credit cards and don’t have cash anymore.

We’ve moved toward a digital, Internet-enabled world that is very efficient. But in doing so, we have, as a society, stripped away the older methods of operating. There’s not the degree of resiliency in the system that you would want if you thought that the network and digital infrastructure were actually quite vulnerable. The issue is that we don’t know how vulnerable or how resilient it is. That’s quite scary when you think about the potential for cyberattacks. The cyber world today favors offense over defense by a considerable margin; it’s much easier to carry out attacks than to defend against them. As a nation, I don’t think we’ve invested enough in thinking about societal resiliency against these kinds of large-scale disruptions.

Interview condensed and edited for length and clarity.
