NTIA's latest policy recommendations look to improve AI accountability


The new AI Accountability Policy Report from the National Telecommunications and Information Administration calls for independent audits and transparency in training data for artificial intelligence systems.

The National Telecommunications and Information Administration released a new series of artificial intelligence recommendations for system developers and government agencies alike, aimed at fostering accountability practices in generative AI and machine learning technologies.

The agency’s new AI Accountability Policy Report, released on Wednesday, details eight key recommendations that fall into three categories — guidance, support and regulatory requirements — for the safe usage of AI systems.

“Responsible AI innovation will bring enormous benefits, but we need accountability to unleash the full potential of AI,” said Alan Davidson, assistant secretary of commerce for communications and information and NTIA administrator. “NTIA’s AI Accountability Policy recommendations will empower businesses, regulators, and the public to hold AI developers and deployers accountable for AI risks, while allowing society to harness the benefits that AI tools offer.”

The NTIA’s first pillar, guidance, advises the federal government to work with stakeholders in creating guidelines for AI audits and auditors, improving information disclosures and applying existing liability laws to AI systems. The last component addresses, in particular, who is held accountable for harms caused by AI systems.

Support refers to the workers and research within the federal government. The NTIA recommends that the government invest the resources necessary to meet the national need for independent evaluations of AI systems, notably by supporting the operations of the U.S. AI Safety Institute and by establishing a permanent National AI Research Resource to follow the pilot program that the National Science Foundation launched in January.

On the research front, the policy report states that federal officials need to focus on the creation of accessible tools to assess what data is used to train a given AI system to gauge limitations and capabilities. 

The final pillar, regulations, advocates for independent audits and regulatory inspections of high-risk AI models and systems. It also suggests strengthening the federal government’s capacity to address AI risks and practices, including through a potential registry of high-risk AI deployments involving sensitive data or consequential outcomes.

The NTIA also noted that federal contractors should be required to adopt similarly “sound AI governance and assurance practices.” 

The report draws on public responses to NTIA’s request for comment on AI accountability, issued in April 2023, which generated over 1,400 submissions.

“The comments submitted to the RFC compose a large and diverse corpus of policy ideas to advance AI accountability,” the NTIA wrote. “While there were significant disagreements, there was also a fair amount of support among stakeholders from different constituencies for making AI systems more open to scrutiny and more accountable to all.”

NTIA found significant agreement on several fronts: concern over potential AI system harms, the need to ensure accountability across the AI software development and deployment lifecycle, support for sector-specific oversight measures and funding for the growth of AI accountability as a field.

Serena Oduro, a senior policy analyst at the nonprofit advocacy group Data & Society, said that her group’s response to the NTIA’s request for comment highlighted the need to combine technical solutions with social science methods to maintain a human-centric approach to AI accountability.

“AI systems may only work for us when the technical and social aspects that influence the efficacy of AI systems are grappled with,” Oduro said in a statement to Nextgov/FCW. “Technical solutions alone will not produce accountability. But, technical approaches married with social science methods, the public's involvement, and regulatory action can lead us to an AI ecosystem that is robust and earns the trust of the public.”

Oduro added that, because the NTIA functions in an advisory capacity to the executive branch, these recommendations could influence how federal agencies implement President Joe Biden’s October 2023 executive order on AI, which advocates a strongly human-centric approach to AI design. The new report could also prompt private sector companies to adopt similar parameters in their AI development and deployment.

“Due to NTIA’s positioning within the Department of Commerce, I hope that NTIA uses the report to highlight that long-term market success relies on companies being open and accountable to the public,” Oduro said. “The public deserves to know that AI systems that impact their access to crucial services and mediate their lives are scrutinized.”