Fighting bots and comment spam
- By Lia Russell
- Jan 31, 2020
The federal rulemaking system includes periods of public input and discussion, but the systems that collect comments are frequently targeted by spam, disinformation and other campaigns designed to simulate massive public support that does not actually exist.
A report from a Senate Homeland Security subcommittee published in October found that agencies were not well equipped to handle massive amounts of fraudulent and otherwise suspect statements, as evidenced when millions of bot-generated comments in favor of repealing net neutrality flooded the Federal Communications Commission's online public comments system in 2017.
Experts say the submission of fraudulent comments -- ones attributed to someone other than the person who actually submitted them -- comes with its own set of issues.
"We're trying to make explicit the social and economic costs of fraudulent comments," MITRE Corp.'s Sanith Wijesinghe said at a Jan. 30 talk at the General Services Administration. "There were 22 million comments on net neutrality. Half were from stolen identities. Now we have on public record a statement attributed to someone without their consent, which poses all kinds of issues, particularly for next-of-kin if the person is deceased."
It also raises the question of how much taxpayer money should be spent on processing comments that probably originated in foreign countries, he added. "As we saw with the Evidence-Based Policy Act, the overhead associated with tracking comments is non-trivial."
Prof. Steven Balla of George Washington University argued that most mass-comment campaigns come from legitimate advocacy groups and are not the product of shadowy foreign entities. A study he conducted of the Environmental Protection Agency's rulemaking process found that most of the comments the agency received during public comment periods came from environmental rights organizations. He defined mass-comment campaigns as "collections of identical and near-duplicate comments sponsored by organizations and submitted by group members and supporters" -- a legitimate function of civil society organizing in a democracy.
"The center of gravity [with mass-comment campaigns] falls towards mere statements of directional preference," he said in a panel presentation at the GSA event. "We don't see mass-comment campaigns as an abuse of the rulemaking process. In that particular space, it's a legitimate use of public comment." He added that "agencies haven't been inundated with anything that has halted the rulemaking process."
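Balla's definition of a mass-comment campaign -- identical and near-duplicate comments -- lends itself to a simple automated check. The sketch below, a hypothetical illustration not drawn from any agency's actual tooling, greedily clusters comments whose normalized text is highly similar, using Python's standard-library `difflib`; the 0.9 similarity threshold is an assumption an analyst would tune.

```python
from difflib import SequenceMatcher


def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivial edits don't hide duplicates."""
    return " ".join(text.lower().split())


def is_near_duplicate(a: str, b: str, threshold: float = 0.9) -> bool:
    """Treat two comments as near-duplicates if their similarity ratio meets the threshold."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold


def cluster_campaigns(comments: list[str], threshold: float = 0.9) -> list[list[int]]:
    """Greedily group comment indices into near-duplicate clusters.

    Each comment is compared against the first member of every existing
    cluster; large clusters in the output suggest a coordinated campaign.
    """
    clusters: list[list[int]] = []
    for i, comment in enumerate(comments):
        for cluster in clusters:
            if is_near_duplicate(comment, comments[cluster[0]], threshold):
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters
```

On a docket of millions of comments this pairwise approach would be too slow; production systems would more plausibly use hashing or shingling, but the clustering idea is the same.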
Federal agencies have sought to identify fraudulent comments, bots and mass-comment campaigns using a mix of available technologies and better integration in monitoring public comments systems during the rulemaking process. The challenge is ensuring that citizens can participate in the comment process while agencies screen out "fake" comments without shutting down legitimate discourse.
"It's a personal choice to reveal your identity or not, and there are very valid reasons for that," Wijesinghe said.
There are tools available to force users to authenticate themselves before posting comments, Google's head of global regulatory affairs Michael Fitzpatrick said, citing the use of CAPTCHA interfaces that screen out bots. "It adds friction to the process without pushing back against people who have less resources and sophistication."
Fitzpatrick added that there are invisible versions of CAPTCHA that can be used to screen every single transaction on a website using an individual token.
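With invisible CAPTCHA schemes such as Google's reCAPTCHA v3, the site's server verifies each submission's token against the vendor's verification endpoint, which returns a JSON result including a `success` flag and a risk `score`. The sketch below, a minimal illustration, shows only the decision step applied to such a response; the actual network call (a server-side POST carrying the site secret and the token) is omitted, and the 0.5 score cutoff is an assumed threshold a site operator would tune.

```python
import json


def accept_submission(siteverify_json: str, min_score: float = 0.5) -> bool:
    """Decide whether to accept a comment from a reCAPTCHA v3-style
    siteverify response.

    'success' and 'score' follow the documented response format for
    Google's verification endpoint; a low score suggests an automated
    client, so the submission is rejected.
    """
    result = json.loads(siteverify_json)
    return bool(result.get("success")) and result.get("score", 0.0) >= min_score
```

A comment form's backend would call this after fetching the verification result, routing rejected submissions to manual review rather than silently discarding them.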
Fitzpatrick noted that the technology he described is aimed at bot submissions only. Human-driven fraudulent comments are ultimately the hardest to monitor and require a policy discussion.
"I'd love to see a Schoolhouse Rock video for the rulemaking process," he said. "Almost no American understands it, from a civics point of view. We know how a bill becomes a law but not how a law becomes a regulation."
Lia Russell is a former staff writer and associate editor at FCW.