JAIC to expand ethics team

The Pentagon's Joint Artificial Intelligence Center is looking to expand its ethics staff.

Jane Pinelis, the JAIC's chief of testing, evaluation, and assessment, said the center is working to expand its ethics team, which is charged with ensuring the Defense Department's AI projects meet ethical standards. The team also leads a DOD working group on responsible AI and on implementing and testing those standards.

The DOD adopted five ethics principles in February and hired Alka Patel as the JAIC's chief ethicist. The JAIC also launched a "Responsible AI Champions" pilot earlier this year to train personnel across disciplines on the ethical use of AI.

"I think T&E, test and evaluation, will play an incredibly big role in ensuring that those processes are followed. But it is a challenge for us, by the nature of ethics, those requirements are very qualitative and we have to translate them into something very objective, very quantifiable for each product," Pinelis said during the General Services Administration's Technology Transformation Services' Impact Summit Aug. 12.

That ethics consideration has its own methodology name, which Pinelis referred to as "devsecethops" -- a continuous development loop that collects user, ethical, and security requirements upfront and cycles them back to the developers.

"There's a model where an algorithm is developed and we test it at that developmental level and provide feedback back to the developer," she said. "Once an algorithm is good enough, then we can move on with integration and operational testing, as well as human machine testing."

Pinelis said the JAIC is focused on automated testing of its solutions through a repeatable process that mimics commercial industry practices and starts at the developmental testing stage.

"So when we first get a model, we are able to somewhat easily iterate with a vendor. They can put the model through test and evaluation, they know what the process is. We have procured a couple of test harnesses, which we can use for this purpose, and they can simply submit their model on, get results on the backend of it, see how the model does and hopefully quickly iterate."

Pinelis said the process aligns with commercial industry practices, which has been helpful, even though industry's requirements aren't as stringent as the DOD's.

But the DOD likely won't incorporate AI more broadly, beyond making tedious processes more efficient, until it has mastered how to test against its ethical principles.

"As I think about the ethical principles, I think, we're not going to be moving in that direction until we've figured out how we can quantify and test and ensure that we're able to adhere to the ethical principles that we've adopted," she said.
