When may a robot kill? New DOD policy tries to clarify

In this 2016 photo, Marines with the 5th Marine Regiment prepare the robotic Multi Utility Tactical Transport for testing at Marine Corps Base Camp Pendleton, Calif. Lance Cpl. Julien Rodarte / U.S. Marine Corps

An updated policy tweaks wording in a bid to dispel confusion.

Did you think the Pentagon had a hard rule against using lethal autonomous weapons? It doesn’t. But it does have hoops to jump through before such a weapon might be deployed—and, as of Wednesday, a revised policy intended to clear up confusion.

The biggest change in the Defense Department’s new version of its 2012 doctrine on lethal autonomous weapons is a clearer statement that it is possible to build and deploy such weapons safely and ethically, but not without a lot of oversight.

That’s meant to clear up the popular perception that there’s some kind of ban on such weapons. “No such requirement appears in [the 2012 policy] DODD 3000.09, nor any other DOD policy,” wrote Greg Allen, the director of the Artificial Intelligence Governance Project and a senior fellow in the Strategic Technologies Program at the Center for Strategic and International Studies.

What the 2012 doctrine actually says is that the military may make such weapons but only after a “senior level review process,” which no weapon has gone through yet, according to a 2019 Congressional Research Service report on the subject.

That’s led to a lot of confusion about the Defense Department’s policy on what it can and can’t build—confusion that has not been helped by military leaders and officers who insist that they are strictly prohibited from building lethal autonomous weapons. In April 2021, for example, then-Army Futures Command head Gen. John Murray said, “Where I draw the line—and this is, I think, well within our current policies—if you’re talking about a lethal effect against another human, you have to have a human in that decision-making process.” But that’s not what the policy actually said.

The updated policy lays out guidelines to make sure that autonomous and semi-autonomous weapons function the way they are supposed to, and it establishes a working group.

“The directive now makes explicit the need for an autonomous weapon system, if it's approved, to be reviewed,” Michael Horowitz, the director of the emerging capabilities policy office in the Office of the Under Secretary of Defense for Policy, told reporters on Wednesday. “If it changes to a sufficient degree that a new review would appear necessary. Or if a non-autonomous weapon system has autonomous capabilities added to it, it makes clear that it would have to go through the review process.”

Horowitz continued, “There are essentially a lot of things that were…maybe…not laid out explicitly in the original directive that may have contributed to some of the, maybe, perceptions of confusion…and we wanted to clear as much of that up as possible. By, for example, making sure that the list of exemptions was clearly a list of exemptions to the senior review process for autonomous weapon systems rather than a list of what you can or can't do.” 

CSIS’ Allen told Defense One, “NATO released the summary of its Autonomy Implementation Plan last year. That plan states that ‘NATO and Allies will responsibly harness autonomous systems.’ This 3000.09 update shows that the DoD believes that there are ways to responsibly and ethically use autonomous systems, including AI-enabled autonomous weapons systems that use lethal force. The DoD believes that there should be a high bar both procedurally and technically for such systems, but not a ban. One of the DoD’s goals in openly publishing this document is an effort to be a transparent world leader on this topic.”