The disinformation game

The federal government is poised to bring new tools and strategies to bear in the fight against foreign-backed online disinformation campaigns, but how and when it chooses to act could have ramifications for the U.S. political ecosystem.

The federal government is poised to bring new tools and strategies to bear in the fight against foreign-backed disinformation campaigns online, but how and when agencies choose to publicly identify such campaigns could have ramifications for the U.S. political ecosystem.

Deputy Attorney General Rod Rosenstein announced in July that the Department of Justice set up a new task force to counter foreign influence efforts and signaled a more active role for federal agencies. The Departments of Homeland Security and State have also set up similar task forces or programs to counter "malign foreign influence operations" online and offline.

Rosenstein framed the new efforts as a tech-focused update to the interagency task forces President Ronald Reagan set up in the 1980s to counter Soviet propaganda campaigns targeting the U.S. populace.

"Some people believe they can operate anonymously through the internet, but cybercrime generally does create electronic trails that lead to the perpetrators if the investigators are sufficiently skilled," Rosenstein said.

The FBI and DHS already work with state and local governments on election security measures, provide threat briefings to the private sector and hand down public indictments of hackers and troll factories associated with the Russian government. The State Department's Global Engagement Center works to counter propaganda efforts by terrorist organizations abroad. In 2016, the center received a broader mandate from Congress to tackle state-sponsored disinformation operations.

Now these agencies are exploring the use of technologies like artificial intelligence, machine learning and other tools to map out and identify coordinated influence campaigns online and trace them back to their source.

"Pretty soon, we should be able to very accurately predict when disinformation campaigns are coming based on tracking and mapping the troll factories," said Shawn Powers, executive director for the advisory commission on public diplomacy at State. "We will be able to actually identify these campaigns within maybe even 24 hours of when they start, which then gives us a chance to get really proactive instead of reactive."

While many of the non-technical aspects of State's anti-disinformation program have been around for decades, a State Department spokesperson told FCW that the technological component is still in its infancy and is aimed at countering disinformation campaigns in other countries, not the United States. According to the spokesperson, the department is currently relying on a mixture of commercial off-the-shelf products and a machine learning algorithm developed in-house that can monitor social media activity and content, identify trends in conversation, flag botnets and identify false personas online.
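
The department has not described how its in-house algorithm works. As a rough, hypothetical illustration of one common way botnets get flagged, the sketch below (account names and the overlap threshold are invented for illustration) scores pairs of accounts by how much of their amplified content overlaps; clusters of accounts pushing nearly identical link sets are a frequent tell of coordination.

```python
# Hypothetical sketch -- not the State Department's actual system.
# Flags pairs of accounts whose amplified content overlaps far more
# than organic users' typically would.

from itertools import combinations


def jaccard(a, b):
    """Overlap between two sets of shared URLs/post IDs (0.0 to 1.0)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)


def flag_coordinated_pairs(activity, threshold=0.8):
    """Return account pairs whose shared content overlaps suspiciously.

    activity: dict mapping account handle -> set of URLs or post IDs
              the account amplified during some time window.
    """
    flagged = []
    for (acct1, items1), (acct2, items2) in combinations(activity.items(), 2):
        score = jaccard(items1, items2)
        if score >= threshold:
            flagged.append((acct1, acct2, round(score, 2)))
    return flagged


if __name__ == "__main__":
    # Toy data: three personas pushing the same links, one organic user.
    sample = {
        "persona_A": {"url1", "url2", "url3", "url4"},
        "persona_B": {"url1", "url2", "url3", "url4"},
        "persona_C": {"url1", "url2", "url3"},
        "organic_user": {"url2", "url9"},
    }
    print(flag_coordinated_pairs(sample))
```

Real systems layer many such signals together, but the spokesperson's caveat applies even to this toy version: overlap alone cannot distinguish a paid troll network from, say, activists legitimately sharing the same petition.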

State is also cultivating relationships with private-sector tech companies to further build out its technical capabilities. Still, even at this early stage the program has presented State officials with logistical and ethical dilemmas around free speech and privacy.

"As you dig into the problem, it quickly becomes apparent that it's not as easy [to expose these operations] as it might otherwise seem," said the State Department spokesperson. "There are people who use these things for legitimate reasons or conceal their identities not for nefarious reasons, but for privacy."

Bots or free speech?

The concerns are more than hypothetical. In July, Facebook shut down 32 accounts, pages and groups tied to mostly progressive and leftist political causes that company officials claimed displayed "coordinated" and "inauthentic" behavior reminiscent of an influence campaign. While Facebook officials declined to make a firm attribution as to the origin, they said the campaign bore similarities to the Russia-based Internet Research Agency activity tracked during the 2016 election, with adjustments like the use of virtual private networks and third-party ad-buyers to further obfuscate the location and identity of users.

However, the fallout from those actions highlights just how tethered influence operations can be to regular online discourse and underscores how perilous it can be for large companies or governments to enter the fray. The day after Facebook made its announcement, American citizens who had organized or administered the 32 shuttered accounts and pages complained that Facebook had censored them without notice and, because of the actions of a few people, unfairly painted their otherwise-legitimate groups and movements as puppets of a foreign-backed influence campaign.

"We've since created a new Facebook event but we know real organizing comes from talking with our neighbors, and that this is a real protest in Washington, D.C.," said one of the groups on Twitter shortly after the ban. "It is not George Soros, it is not Russia, it is just us."

Investigations by U.S. intelligence agencies and Congress have found that Russian troll factories like the Internet Research Agency deliberately latched onto pre-existing groups and movements within the American political ecosystem. In many cases, these operations identified and infiltrated online groups largely run by Americans and focused on issues such as opposition to fracking, genetically modified foods and campaign finance corruption, in order to gain access to audiences and influence voting priorities.

Who goes public?

As part of its Cyber-Digital Task Force report, the DOJ released its policy for disclosing foreign influence operations and the factors it must weigh before doing so. The policy states that "it may not be possible or prudent to disclose foreign influence operations" due to operational, investigative or other constraints. Even when it does take action, the DOJ and other federal agencies "will not necessarily be the appropriate entity to disclose information publicly" about such operations.

While taking questions at an Aug. 6 event in Washington, D.C., Adam Hickey, deputy assistant attorney general at the Department of Justice, told FCW that the question of when and how to publicly identify and mitigate an ongoing influence campaign touches on "a real sensitivity." Even as many officials feel that government inaction during the 2016 elections may have only emboldened Russia, agencies must still thread a narrow needle: pushing back more forcefully without delegitimizing certain points of view or appearing to put a thumb on the scale during active political debates.

"I don't want to suggest the government is likely or largely or frequently going to be weighing in on the truth of a particular argument," Hickey said. "In fact…there are a lot of reasons why we might not do that and one of the principle reasons is avoiding even the appearance of partiality. So, if you're talking about misinformation in the context of an election, that's going to be a situation where we're particularly cautious."

On the other hand, Hickey said that disinformation about the government itself, such as online or text campaigns that spread false information about voting locations or times, is an area where DOJ and other agencies may have a stronger interest in alerting the public.

In a sideline conversation after his remarks, Hickey declined to say what technologies DOJ or other agency task forces may be using to track and monitor campaigns in the digital space, saying only that any such tools would need to be "lawful" and that the focus would be on passing relevant evidence gleaned from technical forensics and intelligence sources to social media companies and other online providers.

"Obviously there has to be a way to lawfully deploy technology to do that," said Hickey. "What that is, I don't have a clean, easy answer for you. The folks who have an edge here are providers who obviously see what happens on their system who have more of a technology base, and who are in a position to make decisions about how best to make sure that platforms are used in the way they want them to be used."

Whether technology today is capable of accurately identifying and separating malicious foreign content online from domestic speech is an open question. Similar efforts outside of government, like the Hamilton 68 dashboard, purport to identify and publicize Russian influence operations by tracking what their bot networks are amplifying on social media platforms. However, the dashboard has been criticized in the media and even by one of its own creators for spitting out dubious assertions or misidentifying organic online activity as nefarious and coordinated foreign influence.

During a Senate Intelligence Committee hearing in July, Sen. James Risch (R-Idaho) asked a panel of experts, including Todd Helmus, a senior behavioral scientist at the RAND Corporation, how their methods of tracking foreign influence operations online could surmount the "enormous if not impossible" task of separating malicious foreign activity from American citizens engaging in protected free speech.

"That's challenged our bot detectors," said Helmus. "There are bot detectors available that can detect some types of content that mimic the characteristics of bots, but it is an arms race as developers develop ways to detect bots based on their inhuman levels of content, timing of tweets or what have you. Producers of those bots will then identify other ways of circumventing that and staying covert."

The Department of Homeland Security's 10-member task force is currently canvassing for partners in the research, policymaking and tech communities. Two DHS officials, speaking in a not-for-attribution setting, said the task force was looking "to build long-term capability in this space" and examining past disinformation campaigns centered on incidents like the poisoning of double agent Sergei Skripal in the U.K. or the Parkland school shooting, to spot common behaviors or anomalies in online discourse that indicate a coordinated campaign intended to spread false information.
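
One concrete anomaly of the kind those officials describe is "copypasta": many nominally unrelated accounts pushing near-identical text within minutes of one another, which rarely happens organically at scale. A minimal, hypothetical detector (the window size and account threshold are invented for illustration) might look like this:

```python
# Hypothetical sketch of a "copypasta burst" detector -- not DHS's method.
# Flags messages posted by many distinct accounts inside a short window.

from collections import defaultdict


def normalize(text):
    """Crude normalization so trivially edited copies still match."""
    return " ".join(text.lower().split())


def find_copypasta_bursts(posts, window_sec=600, min_accounts=5):
    """posts: list of (timestamp, account, text) tuples.

    Returns (normalized_text, account_count) for messages posted by at
    least `min_accounts` distinct accounts within `window_sec` of each other.
    """
    by_text = defaultdict(list)
    for ts, account, text in posts:
        by_text[normalize(text)].append((ts, account))

    bursts = []
    for text, hits in by_text.items():
        hits.sort()
        # Slide a window over the timestamps and count distinct accounts.
        for start_ts, _ in hits:
            in_window = {acct for ts, acct in hits
                         if start_ts <= ts <= start_ts + window_sec}
            if len(in_window) >= min_accounts:
                bursts.append((text, len(in_window)))
                break
    return bursts
```

Looking back at events like the Skripal poisoning with a detector of this sort is far easier than the real-time problem, since the ground truth of which narratives were pushed by coordinated accounts is already partly established.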

Like representatives from other agencies, the DHS officials said they worry about whether the government taking a more active role in publicly identifying such campaigns could do more harm than good if it's not executed correctly.

"We are fairly certain that in a lot of contexts, we are not the right messenger," said one official. "Hasty policy is rife with unintended consequences, so part of this is mapping out an idealized solution and figuring out a right way to tack a policy solution onto it. Because it's really easy to say 'hey the government's not doing this, why isn't the government doing that?' but it's more useful to understand government's role in context with the other actors in this space."

Still, some officials feel that doing something is better than letting misinformation and disinformation go unchallenged. The State Department spokesperson said that when the government faces a choice between calling out an untrue charge that harms U.S. interests and letting it fester, pushing back against false information should win out.

"I guess the answer is, in a sense, it doesn't really matter," the spokesperson said. "If there's a campaign out there in the Baltics with NATO allies and there's a narrative -- whether foreign pushed or not -- that they're engaged in horrible behavior and raping and pillaging and we know that's not true, it's our job to blunt that narrative."