Can the IC police foreign disinformation on social media?

The latest Intelligence Authorization bill includes $30 million for a new research center where social media companies, researchers and journalists would work together to study and expose online disinformation campaigns.

An altered video of House Speaker Nancy Pelosi appearing to stutter and slur her words generated millions of views on Facebook and other social networks late last week and was retweeted by President Donald Trump.

Facebook appended an alert for users attempting to share the altered video, noting that the material was the subject of additional reporting, but the company declined to remove it.

Monika Bickert, Facebook's vice president for global policy management, said in a May 24 interview on CNN that the company draws a distinction between misinformation that is a threat to public safety and misinformation that is merely political.

"If there were misinformation that was, let's say, tied to an ongoing riot or the threat of some physical violence somewhere in the world, we would work with safety organizations on the ground to confirm falsity and the link to violence, and then we actually would remove that misinformation," Bickert said. "But when we're talking about political discourse and the misinformation around that, we think the right approach is to let people make an informed choice," she added later in the interview.

Some lawmakers are looking for the federal government to get into the "informed choice" business when it comes to malicious online disinformation generated abroad.

The Senate Intelligence Committee passed an authorization bill last week that includes a provision offered by Sen. Mark Warner (D-Va.) giving the Director of National Intelligence and Secretary of Defense authority to establish a new, $30 million Social Media Data Analysis Center to analyze and publicize data around ongoing foreign influence operations online.

The center envisioned by the bill would be run by a non-profit but funded by the government. It would pull in representatives from social media companies, non-governmental organizations, data journalists, research centers and academics to sift through and analyze data across multiple social media platforms to detect and expose the clandestine foreign propaganda campaigns that U.S. intelligence agencies and disinformation experts say are becoming increasingly commonplace in the digital ecosystem.
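The bill does not prescribe how that cross-platform sifting would work. As a purely illustrative sketch, one approach researchers commonly take is to flag content posted verbatim by many distinct accounts across several platforms within a short window; the field names and thresholds below are assumptions for the example, not anything drawn from the legislation.

```python
from collections import defaultdict
from datetime import timedelta
import hashlib
import re

def normalize(text):
    """Lowercase and collapse whitespace so near-identical posts hash together."""
    return re.sub(r"\s+", " ", text.lower()).strip()

def flag_coordinated_content(posts, min_accounts=20, min_platforms=2,
                             window=timedelta(hours=6)):
    """Flag content shared verbatim by many accounts on several platforms in a short burst.

    `posts` is an iterable of dicts with keys: platform, account, text,
    timestamp (a datetime). The thresholds are illustrative defaults.
    """
    by_content = defaultdict(list)
    for post in posts:
        digest = hashlib.sha256(normalize(post["text"]).encode()).hexdigest()
        by_content[digest].append(post)

    flagged = []
    for digest, group in by_content.items():
        group.sort(key=lambda p: p["timestamp"])
        accounts = {(p["platform"], p["account"]) for p in group}
        platforms = {p["platform"] for p in group}
        burst = group[-1]["timestamp"] - group[0]["timestamp"] <= window
        if len(accounts) >= min_accounts and len(platforms) >= min_platforms and burst:
            flagged.append({
                "content_hash": digest,
                "accounts": len(accounts),
                "platforms": sorted(platforms),
                "sample": group[0]["text"],
            })
    return flagged
```

Volume-and-timing heuristics like this only surface candidates for human review; attributing a flagged cluster to a foreign actor is the harder step that the center's analysts, and the intelligence reporting behind them, would have to supply.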

Harvey Rishikof, former senior policy advisor to the director of national intelligence, said the government can have more success defusing the impact of disinformation campaigns not by pushing for laws to authorize the removal of content, but by clearly and publicly mapping out the origins of that content. By demonstrating how a propaganda campaign was created and distributed through the information stream, he argued, the government can drain that campaign of its power to organically influence a debate.

"It's going to be a requirement for [the U.S. government] to be quite clean and clear, and then the question is whether or not you believe it, because the credibility of governments is under attack," Rishikof said at a May 21 cybersecurity event hosted by the American Bar Association. "You're looking for independent authenticators that a rational third party would say doesn't have any prejudice to be involved in the reveal of what is taking place.

The Department of Justice put out a policy last year detailing how it would respond to ongoing foreign influence campaigns, but such operations often intentionally recruit Americans to co-sponsor online groups and pages that blur the lines between foreign interference and protected free speech.

Deputy Assistant Attorney General Adam Hickey told the House Oversight and Government Reform Committee May 22 that among the strategy's first principles is to avoid partisan politics.

"Victim notifications, defensive counterintelligence briefings and public safety announcements are traditional department activities, but they must be conducted with particular sensitivity in the context of foreign influence and elections," said Hickey. "In some circumstances, exposure can be counterproductive or otherwise imprudent."

The perception that the government might be placing its thumb on the scales of American political debate has made DOJ and other agencies wary of taking a harder line. While intelligence and law enforcement agencies worked to uncover foreign online campaigns leading up to the 2018 midterm elections, they preferred to pass that information along to the private social media companies, which could then remove the content and publicize the takedowns themselves.

To establish the proposed Social Media Data Analysis Center, U.S. officials would need to work out information-sharing protocols among the government, social media companies and the public, and develop rules around which groups would be eligible to participate in the center. They would also need to negotiate with social media companies and researchers over privacy protections and over what data and metadata would be shared for analysis.
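The bill leaves those details to negotiation, so any concrete schema is speculative. As a minimal sketch, assuming the parties settled on sharing pseudonymized behavioral metadata rather than raw posts or account handles, a shared record might look something like this (all field names are hypothetical):

```python
from dataclasses import dataclass, asdict
from datetime import datetime
import hashlib

@dataclass
class SharedPostRecord:
    """Illustrative metadata a platform might share with the center.

    Field names are assumptions for this sketch; the bill leaves the actual
    schema to negotiation among the government, companies and researchers.
    """
    platform: str
    account_id_hash: str      # pseudonymized, not the raw account handle
    account_created: datetime
    posted_at: datetime
    content_hash: str         # hash of the post text, not the text itself
    share_count: int
    ad_spend_usd: float       # 0.0 if the post was not promoted

def pseudonymize(account_handle: str, salt: str) -> str:
    """One-way hash so analysts can link an account's activity without naming it."""
    return hashlib.sha256((salt + account_handle).encode()).hexdigest()

# Example: building a record a platform could pass to the center.
record = SharedPostRecord(
    platform="example-network",
    account_id_hash=pseudonymize("@some_account", salt="rotating-secret"),
    account_created=datetime(2016, 3, 1),
    posted_at=datetime(2019, 5, 23, 14, 30),
    content_hash=hashlib.sha256(b"post text").hexdigest(),
    share_count=12842,
    ad_spend_usd=0.0,
)
print(asdict(record))
```

Pseudonymized identifiers would let analysts link an account's behavior across posts without the center ever holding the underlying handle, one possible way to square the data-access and privacy concerns the bill flags.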

If funded, the center wouldn't go live until at least 2021. Under the terms of the bill, the Director of National Intelligence would need to submit a report to Congress by next March, laying out funding needs, liability protections for all parties involved in the center, proposed penalties for misusing the data and any changes to the center's mission needed to "fully capture broader unlawful activities that intersect with, complement or support information warfare tactics."