Do our algorithms have enough oversight?

Unconscious bias is just one of the unintended consequences complicating government's use of AI.

As companies look to leverage artificial intelligence and other innovations in the workplace, tension has emerged between the profit motives driving adoption and the unintended impacts of the technologies themselves, highlighting the ongoing need for oversight and research.

At a Nov. 20 panel on “Artificial Intelligence At Work” hosted by Workday and Politico, industry experts stressed that while neural networks and other innovations can streamline or even automate work previously performed by human operators, managers are still needed to step in and make corrections when machines reproduce human failings such as ingrained algorithmic bias.

Rep. Bill Foster (D-Ill.) pointed to the history of loan discrimination against minority groups to illustrate the need for companies to be able to override their neural networks when necessary to prevent unintended bias. “It is statistically true that people of different racial groups are more or less likely to have relatives that are wealthy because of unjustifiable past discrimination,” he said at the panel. “The problem is you then can have two identically situated families [representing] two different racial groups, so the neural network will identify proxies for race, and you’re left with a choice. Are we going to tell the neural network ‘no,’ even though this is a statistically valid way to maximize your profits?”
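Foster's scenario, in which a model never sees race but still learns a stand-in for it, can be illustrated with a small sketch on entirely synthetic data. The feature names and numbers below are invented for illustration and do not reflect any actual lender's model.

```python
# Hypothetical sketch: a model can learn a proxy for a protected attribute
# even when that attribute is never given to it as an input. All data here
# is synthetic and invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (0 or 1); the model never sees this column.
group = rng.integers(0, 2, size=n)

# Family wealth correlates with group membership because of past
# discrimination (Foster's point about inherited disadvantage).
family_wealth = rng.normal(loc=50 + 20 * group, scale=10, size=n)

# A neighborhood code that tracks group membership acts as the proxy.
neighborhood = (group + rng.normal(0, 0.3, size=n) > 0.5).astype(int)

# Historical repayment labels are driven by family wealth, so they also
# end up correlated with group.
repaid = (family_wealth + rng.normal(0, 10, size=n) > 60).astype(int)

# The lender's model sees only an income-like feature and the proxy.
income = rng.normal(40, 10, size=n)
X = np.column_stack([income, neighborhood])
model = LogisticRegression().fit(X, repaid)

# Approval rates end up very different by group, even though "group"
# was never an input feature.
approved = model.predict(X)
for g in (0, 1):
    print(f"group {g}: approval rate = {approved[group == g].mean():.2f}")
```

The point of the sketch is Foster's: removing the protected attribute from the inputs does not remove it from the model's behavior.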

Part of the problem is that bias is hard to assess, said National Science Foundation Assistant Director Dr. Dawn Tilbury, who heads NSF's Engineering Directorate. She pointed to her agency's Future of Work at the Human-Technology Frontier program as one effort to study how humans and technology interact, particularly when the outcomes of algorithms can be mapped easily but their intent is harder to parse.

“How would we define that a data set is unbiased or fair? I don't think we have that definition,” she said. “[The NSF] wants to make sure that these algorithms are fair, whether they’re used for hiring or whatever. A lot of what the government can do is fund basic research to help advance these understandings, because private industry is just going to do what's most profitable.”
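Tilbury's observation that there is no agreed definition of an unbiased or fair data set can be made concrete: different candidate metrics measure different things and often cannot all be satisfied at once. The sketch below, run on made-up hiring predictions, computes two commonly discussed measures, a demographic parity gap and an equal-opportunity gap; the names and numbers are illustrative assumptions, not any agency's standard.

```python
# Hypothetical sketch: two common fairness metrics computed on the same
# made-up hiring predictions. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
group = rng.integers(0, 2, size=n)        # protected attribute
qualified = rng.random(n) < 0.5           # ground-truth label
# A hypothetical screening model that is slightly stricter on group 0.
hired = qualified & (rng.random(n) < np.where(group == 1, 0.9, 0.7))

def selection_rate(mask):
    # Overall hire rate within the masked group.
    return hired[mask].mean()

def true_positive_rate(mask):
    # Hire rate among qualified candidates within the masked group.
    return hired[mask & qualified].mean()

# Demographic parity: do the groups get hired at similar rates overall?
dp_gap = abs(selection_rate(group == 1) - selection_rate(group == 0))

# Equal opportunity: among qualified candidates, are the rates similar?
eo_gap = abs(true_positive_rate(group == 1) - true_positive_rate(group == 0))

print(f"demographic parity gap: {dp_gap:.2f}")
print(f"equal opportunity gap:  {eo_gap:.2f}")
```

Which gap counts as unfair, and how small it must be, is exactly the definitional question Tilbury raises.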

In an interview with FCW, Tilbury said NSF scrutinizes its own portfolio as much as possible to make sure the work it funds isn't producing biased results.

“For example, we look at how many of our project proposals were submitted by women, then how many women-led projects were actually funded,” Tilbury said. “If only a certain amount of women’s projects were funded or submitted we ask, ‘What happened there?’”
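The check Tilbury describes amounts to a simple comparison of proportions. A minimal sketch of that audit follows; the counts are placeholders, not actual NSF figures.

```python
# Hypothetical sketch of the audit Tilbury describes: compare the share of
# proposals submitted by women with the share of funded projects that are
# women-led. The counts below are placeholders, not NSF data.
submitted = {"women_led": 300, "other": 700}
funded = {"women_led": 45, "other": 155}

def share(counts, key):
    return counts[key] / sum(counts.values())

submitted_share = share(submitted, "women_led")              # 0.30
funded_share = share(funded, "women_led")                    # 0.225

# Funding rate within each group is another framing of the same question.
rate_women = funded["women_led"] / submitted["women_led"]    # 0.15
rate_other = funded["other"] / submitted["other"]            # ~0.22

print(f"women-led share of submissions: {submitted_share:.1%}")
print(f"women-led share of funded projects: {funded_share:.1%}")
print(f"funding rate, women-led {rate_women:.1%} vs. other {rate_other:.1%}")

if funded_share < submitted_share:
    print("Gap detected: time to ask 'what happened there?'")
```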

She pointed to studies showing that proposals from people with foreign-sounding names, or whose names sounded more feminine, were less likely to be funded, a pattern NSF has looked to push back against. “If you feed that bias into your algorithms, you’ll get the same biased results unless you retrain the system,” she said.

On the panel, Foster appeared to agree with Tilbury's assessment, saying that while there wasn't a specific law on the books to combat discrimination in technology, efforts were underway within government to address the intersection of discrimination and AI.

“We have a couple of papers coming out that address this point and the problem with AI solutions in hiring when you look at the non-digital world, [such as] the kind of criteria used to evaluate bias and discrimination concerns based on outcome measures,” he said. “Did the employer exhibit evidence of intent to discriminate? How do you measure the intent of an algorithm?”
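One family of outcome measures Foster alludes to looks only at results, not intent. A well-known example from U.S. employment practice is the EEOC's four-fifths rule, under which a group's selection rate below 80 percent of the highest group's rate is generally treated as evidence of adverse impact. The sketch below applies that rule to invented numbers; it is illustrative only, not legal guidance.

```python
# Hypothetical sketch of an outcome-based check: the "four-fifths rule"
# from U.S. adverse-impact analysis. It requires no knowledge of an
# algorithm's intent, only of its outcomes. All counts are invented.
applicants = {"group_a": 400, "group_b": 600}
hired = {"group_a": 40, "group_b": 90}

rates = {g: hired[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for g, rate in rates.items():
    ratio = rate / highest
    status = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{g}: selection rate {rate:.1%}, ratio to highest {ratio:.2f}, {status}")
```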
