JAIC to expand ethics team
- By Lauren C. Williams
- Aug 12, 2020
The Pentagon's Joint Artificial Intelligence Center is looking to expand its ethics staff.
Jane Pinelis, the JAIC's chief of testing, evaluation, and assessment, said the center is working to expand its ethics team, which is charged with ensuring the Defense Department's AI projects meet ethical standards and which leads a DOD working group on implementing and testing responsible AI.
The DOD adopted five ethics principles in February and hired Alka Patel as the JAIC's chief ethicist. The JAIC also launched a "Responsible AI Champions" pilot earlier this year to train personnel across disciplines on the ethical use of AI.
"I think T&E, test and evaluation, will play an incredibly big role in ensuring that those processes are followed. But it is a challenge for us, by the nature of ethics, those requirements are very qualitative and we have to translate them into something very objective, very quantifiable for each product," Pinelis said during the General Services Administration's Technology Transformation Services' Impact Summit Aug. 12.
That ethics consideration has its own methodology name, which Pinelis referred to as "devsecethops" -- a continuous development loop that collects user, ethical, and security requirements upfront and cycles them back to the developers.
"There's a model where an algorithm is developed and we test it at that developmental level and provide feedback back to the developer," she said. "Once an algorithm is good enough, then we can move on with integration and operational testing, as well as human machine testing."
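Sketched in code, the feedback loop Pinelis describes might look like the following. This is a hypothetical illustration only: the function names, metrics, and thresholds are assumptions for the sake of the example, not the JAIC's actual processes or tools.

```python
def evaluate(model, requirements):
    """Score the model (in percent) against each quantified requirement."""
    return {name: check(model) for name, check in requirements.items()}

def devsecethops_loop(model, requirements, revise, threshold=90, max_iters=10):
    """Developmental-test loop: evaluate, cycle failures back, repeat."""
    failures = {}
    for _ in range(max_iters):
        scores = evaluate(model, requirements)
        failures = {n: s for n, s in scores.items() if s < threshold}
        if not failures:
            return model, scores          # good enough: on to integration testing
        model = revise(model, failures)   # feedback cycles back to the developer
    raise RuntimeError(f"requirements still unmet: {sorted(failures)}")

# Illustrative quantified requirements: accuracy (a user need), demographic
# parity (an ethical principle), robustness to perturbed inputs (security).
requirements = {
    "accuracy":   lambda m: m["accuracy"],
    "fairness":   lambda m: 100 - m["parity_gap"],
    "robustness": lambda m: m["robust_accuracy"],
}

def revise(model, failures):
    """Stand-in for a developer iteration: improve the failing metrics."""
    model = dict(model)
    if "accuracy" in failures:
        model["accuracy"] += 5
    if "fairness" in failures:
        model["parity_gap"] -= 5
    if "robustness" in failures:
        model["robust_accuracy"] += 5
    return model

model = {"accuracy": 85, "parity_gap": 20, "robust_accuracy": 80}
final, scores = devsecethops_loop(model, requirements, revise)
```

The point of the sketch is the translation Pinelis flags as the hard part: each qualitative ethical requirement has to become a quantifiable check before the loop can run at all.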
Pinelis said the JAIC is focused on automated testing of its solutions through a repeatable process that mimics commercial industry practices and starts at the developmental testing stage.
"So when we first get a model, we are able to somewhat easily iterate with a vendor. They can put the model through test and evaluation, they know what the process is. We have procured a couple of test harnesses, which we can use for this purpose, and they can simply submit their model on, get results on the backend of it, see how the model does and hopefully quickly iterate."
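A test harness of the kind Pinelis describes could be sketched as follows: a vendor submits a model, the harness runs a fixed, repeatable battery of checks, and a results report comes out the back end. The interface and the toy test cases here are illustrative assumptions, not the JAIC's procured harnesses.

```python
class TestHarness:
    """Hypothetical sketch of an automated, repeatable model test harness."""

    def __init__(self, test_cases):
        # Fixed (input, expected_output) pairs shared by every submission,
        # so repeated runs are directly comparable across model versions.
        self.test_cases = test_cases

    def submit(self, model_fn):
        """Run the standard battery and return a results report."""
        results = [(x, model_fn(x), expected) for x, expected in self.test_cases]
        passed = sum(1 for _, got, exp in results if got == exp)
        return {
            "passed": passed,
            "total": len(results),
            "pass_rate": passed / len(results),
            "failures": [(x, got, exp) for x, got, exp in results if got != exp],
        }

# A vendor can iterate quickly: submit, inspect the failures, resubmit.
harness = TestHarness([(0, 0), (1, 1), (2, 4), (3, 9)])
report = harness.submit(lambda x: x * x)
```

Because the battery is fixed, the process is repeatable in the way commercial CI pipelines are: every model version is measured against the same cases.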
Pinelis said aligning the process with commercial industry practice has been helpful, because commercial requirements aren't as stringent as the DOD's.
But the DOD likely won't incorporate AI more broadly, beyond making tedious processes more efficient, until those ethical considerations are mastered.
"As I think about the ethical principles, I think, we're not going to be moving in that direction until we've figured out how we can quantify and test and ensure that we're able to adhere to the ethical principles that we've adopted," she said.
Lauren C. Williams is senior editor for FCW and Defense Systems, covering defense and cybersecurity.
Prior to joining FCW, Williams was the tech reporter for ThinkProgress, where she covered everything from internet culture to national security issues. In past positions, Williams covered health care, politics and crime for various publications, including The Seattle Times.
Williams graduated with a master's in journalism from the University of Maryland, College Park and a bachelor's in dietetics from the University of Delaware. She can be contacted at [email protected], or follow her on Twitter @lalaurenista.