Technologists, privacy advocates push back against algorithmic 'extreme vetting'

The Trump administration's plan to use predictive software to rate threats posed by immigrants, refugees and asylum seekers will result in discriminatory decisions, privacy advocates and computer scientists warn.

The Trump administration's plan to use predictive software and data mining to evaluate the threats posed, and the benefits offered, by potential immigrants and visitors is likely to yield arbitrary and discriminatory results, according to letters from technologists and privacy advocates.

The letters respond to an ongoing procurement by Immigration and Customs Enforcement, part of President Donald Trump's executive order on immigration that calls for the "extreme vetting" of would-be immigrants. The ICE solicitation calls for the use of data mining and predictive analytics to judge which migrants might pose a terrorist or other threat, and which would "make contributions to the national interest."

In a Nov. 16 letter to acting Homeland Security Secretary Elaine Duke, 54 computer scientists, mathematicians and other technologists urged ICE and the Department of Homeland Security to abandon the effort to contract with firms to design vetting software, arguing that the stated goals of the "extreme vetting" policy will generate poor specifications and lead to discriminatory or arbitrary outcomes.

"Algorithms designed to predict…undefined qualities could be used to arbitrarily flag groups of immigrants under a veneer of objectivity," the group writes. The signatories include three former Federal Trade Commission technologists: Ashkan Soltani, Ed Felten and Lorrie Faith Cranor, along with others with institutional affiliations ranging from Harvard and MIT to Microsoft and Google.

The signers are also concerned that rare events like terrorist attacks are difficult to predict using algorithmic tools.
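
The concern rests on a well-known statistical problem: when the event being predicted is extremely rare, even a very accurate screening tool flags mostly innocent people. The sketch below works through the arithmetic with invented numbers – the population size, threat count and accuracy figure are assumptions for illustration, not data from the letter.

```python
# Illustrative base-rate arithmetic; all numbers are assumed, not from the letter.
population = 10_000_000   # travelers screened
true_threats = 10         # actual threats among them (rare events)
accuracy = 0.99           # tool is right 99% of the time, in both directions

# Threats correctly flagged by the tool.
true_positives = true_threats * accuracy
# Innocent travelers wrongly flagged by the tool.
false_positives = (population - true_threats) * (1 - accuracy)

print(f"Real threats flagged:    {true_positives:,.0f}")    # ~10
print(f"Innocent people flagged: {false_positives:,.0f}")   # ~100,000
# Roughly 10,000 innocent people are flagged for every genuine threat,
# which is the sense in which rare events defeat algorithmic prediction.
```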

A second letter to Duke, from 56 privacy groups, warns that any ICE vetting system would rely on "proxies" as the inputs for its predictions – social media posts to judge potential threats, income information to assess potential value. The predictive worth of these sources is dubious, the groups argue.

"The meaning of content posted on social media is highly context-dependent. Errors in human judgment about the real meaning of social media posts are common. Algorithms designed to judge the meaning of text struggle to make even simple determinations, such as whether a social media post is positive, negative, or neutral," the letter says.

An effort led in part by the Brennan Center for Justice at New York University is looking to steer federal contractors away from the project. There is also a pressure campaign aimed specifically at IBM, which has attended events related to the ongoing procurement.