Labor Dept. watchdog urges 'extreme caution' on facial recognition
Since the start of the pandemic and an accompanying upswing in fraudulent claims in jobless aid systems, the use of facial recognition in identity checks has spread across states, raising OIG concerns about access and privacy.
The Labor Department’s watchdog said there are “urgent equity and security concerns” around the use of facial recognition in unemployment insurance programs that “need immediate attention,” according to an alert memo released Tuesday that also called for more guidance.
The DOL Office of the Inspector General is “concerned that the use of identity verification service contractors may not result in equitable and secure access to [unemployment insurance] benefits,” the memo said, calling for “extreme caution to ensure claimants are not subjected to discrimination by the use of facial recognition technology.”
Remote identity verification achieved digitally through data checks or facial recognition is a relatively new process in the unemployment system.
When Congress provided expanded relief funding at the start of the pandemic to address lockdowns and associated economic disruptions, state unemployment systems became a target for fraudsters using stolen and synthetic identities to try to get benefits.
As states scrambled to respond, many turned to private companies to perform identity checks.
Out of 53 state workforce agencies, 24 told the inspector general that they use an identity verification contractor that relies on facial recognition technology; two agencies did not respond to the watchdog's survey. In total, the agencies used 10 different private companies for facial recognition technology.
Over 90% of the states using facial recognition told the inspector general that contractors have helped reduce fraud, but the alert memo raises concerns about equitable access across demographic groups, such as race and gender, as well as about privacy.
The memo pointed to a 2019 National Institute of Standards and Technology study that found "empirical evidence the algorithms used in current facial recognition technology have a racial and gender bias."
Different algorithms have varying levels of performance, though, and since 2019, the technology has generally improved, a top NIST official recently told lawmakers.
In addition to algorithms, the quality of photographs – particularly exposure levels in photographs of people with darker skin tones – also plays a big role in accuracy, according to NIST.
That could be problematic in the context of online unemployment applications with identity checks, where applicants are tasked with taking quality photos themselves, the watchdog memo points out. Applicants may have cameras of varying quality or differing levels of technical skill.
A major player in selfie-matching ID checks is the vendor ID.me, which a company spokesperson said is used in 25 state jobless aid programs.
One point of reference in the memo is a 2022 internal study on the use of ID.me in Oregon that found “differences in ID.me completion rates among some demographic categories,” although it didn’t pin down causation for those differences.
Interviewed claimants cited technology barriers and confusion about how to verify themselves. Oregon added “mitigation strategies” like in-person help after the study.
Asked for data on how the vendor prevents fraud, an ID.me spokesperson referred FCW to public statements about fraud prevention from states, such as a California press release stating that the vendor has helped stop over $125 billion in attempted fraud.
On equity, ID.me CEO Blake Hall cited via email a NIST study finding that Paravision, the facial recognition provider ID.me uses, tested at a 99.3% accuracy rate or better across various demographic groups.
Hall also pointed to human checks in the system and to a configuration that government agencies can purchase from ID.me, which lets users choose among self-serve selfie checks, video checks that don’t use facial recognition technology and in-person ID checks.
According to an ID.me spokesperson, first-time ID.me pass rates are over 80%.
The OIG said that the Labor Department needs to give states more guidance.
“ETA has provided minimal guidance that specifically addresses facial recognition technology in administering UI benefits,” the memo said. “Without comprehensive guidance, [states] are at risk of using technology that discriminates against claimants entitled to receive [unemployment] benefits and of not adequately safeguarding claimants’ [personally identifiable information].”
In a response from the Labor Department included in the memo, Brent Parton, acting assistant secretary, wrote that the department will issue guidance requiring states “to provide at least one timely, effective, and accessible non-digital alternative to online ID verification,” as recommended by the watchdog. States are already required to offer alternatives to online ID verification.
The department will also emphasize to states that they need to identify and fix any problems with equitable access around identity verification, Parton wrote, noting that existing regulations and guidance already require states to collect and study demographic data to look for any signs of discrimination.
The second concern the Inspector General identifies in the memo is privacy and security.
The watchdog reviewed contracts between states using facial recognition and vendors and found that not all states included requirements around data storage, data disposal, standards for facial recognition technology, audits or the type of facial recognition matching being used.
The Labor Department will issue more guidance with recommended provisions to include in contracts, Parton wrote.
Asked about the watchdog’s privacy concerns, ID.me’s Hall told FCW that the company “does not sell user data” and has advocated for federal privacy legislation.