NIST researcher calls for further evaluation of AI's impact on humans

NIST Information Technology Laboratory chief of staff Elham Tabassi said Friday that the growing popularity of artificial intelligence technology requires more study of its impact and potential harm to humans.

As artificial intelligence chatbots and platforms become increasingly available for public use, a leading government researcher is urging the technology community to look beyond technical specifications and further study the impact AI can have on individuals and society. 

Dr. Elham Tabassi, chief of staff for the National Institute of Standards and Technology Information Technology Laboratory, said Friday at a digital event hosted by George Washington University that taking a risk-based approach to developing AI systems is critical to building public trust.

Tabassi, who helped draft the recently published NIST AI Risk Management Framework, said one of the top challenges AI researchers face “is to figure out how to do standards and evaluations from a socio-technical lens and approach.”

Released in January, the framework features shared terminologies and taxonomies, as well as voluntary guidance on tools for building trustworthy AI and for monitoring and measuring risks.

“A majority of the standards coming out of the [International Organization for Standardization] and all this that you see are technical specifications,” said Tabassi. “Now we want to evaluate the systems and write standards that not only address those, but also the human element, the impact on the human and the harms.”

Congress instructed NIST to develop the voluntary framework in collaboration with AI practitioners and stakeholders as part of the fiscal year 2021 National Defense Authorization Act. Its release comes amid concerns that emerging AI technologies and platforms like ChatGPT – an AI chatbot available for free public use – can be misused by cybercriminals or for other malicious purposes.

The framework seeks to provide organizations with ways to govern, map, measure and manage risks associated with the development and deployment of AI systems. 

Deputy Commerce Secretary Don Graves said in a statement last month that the framework should "accelerate AI innovation" while advancing civil rights and liberties. 

"This voluntary framework will help develop and deploy AI technologies in ways that enable the United States, other nations and organizations to enhance AI trustworthiness while managing risks based on our democratic values," he said.

NIST said the framework was meant to serve as a flexible document that can "adapt to the AI landscape as technologies continue to develop." 

The agency also released an initial playbook to help organizations navigate the AI risk management framework. NIST requested feedback on the playbook by Feb. 27 and said a revised version would be published later in the spring based on input from stakeholders and the community.