Election Security

Former officials flag disinformation as top threat to U.S. elections

Two top former national security officials believe that disinformation campaigns may pose a greater long-term threat to election infrastructure than cybersecurity risks.

"Securing the voting apparatus ... that's hugely important, but that to me at least is one bin of the problem," said former Director of National Intelligence James Clapper while speaking at an Oct. 2 Washington Post event. "The other bin is what I would call, for lack of a better term, intellectual security, meaning how do you get people to question what they read, see and hear on the internet? And this is where the Russians exploited our divisiveness by using social media, so that part of the problem I'm not sure about."

Clapper said that when it comes to protecting voting machines and other election infrastructure, agencies like the FBI, Department of Homeland Security, National Security Agency and others have "done a lot" since 2016.

At the same event, former DHS Secretary Michael Chertoff argued that in some respects, voting machines may be the "least vulnerable" component due to the decentralized nature of U.S. elections and because the machines are in theory (though not always in practice) disconnected from the internet, requiring physical access to compromise them in most cases.

"The idea here is if you can disrupt the unity of effort of the United States or Europe or other democratic countries, then basically you win without firing a shot," said Chertoff. "Because people don't trust each other and they don't trust institutions."

Public trust in government, the media and other civic institutions has been declining among the U.S. population for decades. What has changed the calculus, Chertoff argued, is the advent of social media and data analytics that can now target groups with highly specific grievances.

"That's an area where we're still trying to implement standards and approaches that would mitigate the effect of that," Chertoff said.

Still in dispute is how such information campaigns affect public opinion. Russia executed multiple campaigns on different social media platforms in 2016, but it also hacked and leaked emails from the Democratic National Committee detailing embarrassing information, which were then amplified by bots and trolls online. Some experts believe the hack-and-leak campaign had a far greater impact.

"Deciding whether to adopt a suggested initiative depends on determining whether the benefit of the initiative, in terms of reducing the effectiveness of Russia's activities, exceeds the associated costs," researchers at the RAND Corporation noted in a 2018 report on countering Russian social media influence. "The difficulty with conducting such a calculus, of course, is that the effectiveness of Russia's campaigns -- and thus, the value of thwarting them -- is unknown."

The merging of foreign and domestic actors in such campaigns, as well as an increasing willingness by American groups to adopt similar tactics, has surfaced free speech issues that make many direct government actions on the issue unpalatable if not unconstitutional.

That dilemma may only become more pronounced in future elections.

Research released this week from cybersecurity firm Recorded Future found that it was "alarmingly simple" to find multiple groups advertising "disinformation as a service" campaigns to buyers on the dark web. Such services were "highly customizable" depending on the customer's needs and could be deployed against a targeted company or individual for a few thousand dollars -- or in some cases just a few hundred.

In Congress, lawmakers have proposed legislation to require more transparency around foreign ads in social media, ban the use of botnets that spread propaganda online and require that campaigns report contacts with foreign entities seeking to interfere in U.S. elections. None have passed into law, although legislative attention has pressured companies into changing their own disclosure policies.

This week Sens. Mark Warner (D-Va.) and Marco Rubio (R-Fla.) wrote letters to 11 social media companies to encourage the development of new standards, policies and detection tools for "deepfake" videos and audio posted on their platforms.

"Despite numerous conversations, meetings, and public testimony acknowledging your responsibilities to the public, there has been limited progress in creating industry-wide standards on the pressing issue of deepfakes and synthetic media," they wrote. "Having a clear strategy and policy in place for authenticating media, and slowing the pace at which disinformation spreads, can help blunt some of these risks."

About the Author

Derek B. Johnson is a senior staff writer at FCW, covering governmentwide IT policy, cybersecurity and a range of other federal technology issues.

Prior to joining FCW, Johnson was a freelance technology journalist. His work has appeared in The Washington Post, GoodCall News, Foreign Policy Journal, Washington Technology, Elevation DC, Connection Newspapers and The Maryland Gazette.

Johnson has a Bachelor's degree in journalism from Hofstra University and a Master's degree in public policy from George Mason University. He can be contacted at djohnson@fcw.com, or follow him on Twitter @derekdoestech.
