DOD releases first AI ethics principles, but there's work left to do on implementation

The Defense Department released its first official artificial intelligence ethics principles that it says will shape how it develops and implements the technology.

While no timeline was given for implementation guidance, Joint Artificial Intelligence Center Director Lt. Gen. Jack Shanahan said there were no delays in developing capabilities and that the JAIC is making progress on its cyber defense and warfighter health mission initiatives.

The Defense Department has officially adopted a set of principles to ensure ethical artificial intelligence adoption, but much work is needed on the implementation front, senior DOD tech officials told reporters Feb. 24.

The five principles [see sidebar], based on the recommendations of the Defense Innovation Board's 15-month study of the issue, represent a first step: generalized intentions that AI use and adoption be responsible, equitable, traceable, reliable, and governable.

Those AI ethics guidelines will likely be woven into nearly everything DOD does, from data collection to testing, including cyber, DOD CIO Dana Deasy told reporters.

"We need to be very thoughtful about where that data is coming from, what was the genesis of that data, how was that data previously being used and you can end up in a state of [unintentional] bias and therefore create an algorithmic outcome that is different than what you're actually intending," Deasy said.

The announcement comes a year after DOD released its AI strategy and after years of public protest from tech workers against lethal AI and autonomous weapons systems. Lt. Gen. Jack Shanahan, the head of DOD's Joint Artificial Intelligence Center, has previously said there were "grave misconceptions" about DOD's intentions and technological ability and vowed to bring on an AI ethicist to help shape strategy.

The officials underscored that DOD would not field capabilities that did not meet the principles, but they also acknowledged that responsible AI still needs to be defined and that ongoing discussions and exercises will help determine "who is held responsible" from software development through fielding.

More specific guidance is still needed. Deasy said the committee will develop further guidance on how to bring in data, develop solutions, build and test algorithms, and train operators on how to spot unintended effects. Each of the services and combatant commands would be part of this effort.

DOD AI ethics principles

Responsible. DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.

Equitable. The Department will take deliberate steps to minimize unintended bias in AI capabilities.

Traceable. The Department's AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.

Reliable. The Department's AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.

Governable. The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

Those implementation guidelines will come out of the AI Executive Steering Group, which has a subgroup dedicated to implementation, the officials said. (The officials would not name who was leading the implementation plan or who was in the steering group.)

The group will also work on procurement guidance, technological safeguards, organizational controls, risk mitigation strategies, and training measures.

"These are proactive and deliberate actions" that form the foundation for practitioners but are malleable enough to adapt as tech evolves, Shanahan said.

Shanahan said DOD was also looking to include "non-obligatory language in contracts" that would ask companies how they planned to abide by the principles when building algorithms and tools, though he made clear that would not amount to enforcement.

"I'm not suggesting enforcement at the beginning of it," he said. "These are early conversations to be had with our industry partners to say now that we've established these principles for AI ethics, could you develop the capabilities that address each of the five at some point along the way through [research, development, testing and evaluation]."