How DOD embraced bug bounties -- and how your agency can, too

Hack the Pentagon proved to Defense Department officials that outside hackers can be assets, not adversaries.


It was a Tuesday in April, and Mark Litchfield was poking around the Defense Department's Defense Video Imagery Distribution System, looking for security holes.

It didn't take him long to find one. He soon uncovered a vulnerability known as blind persistent cross-site scripting (XSS). It could enable any maliciously minded hacker to log in as a site administrator and broadcast whatever content he or she wanted from the DVIDS website -- which is the primary way the U.S. military keeps the public informed about its activities around the world. The hacker could also have accessed the email messages of DVIDS' registered users.
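A persistent (stored) XSS flaw of this kind arises when a site saves attacker-supplied input and later renders it unescaped in other users' browsers. The sketch below is purely illustrative -- it is not DVIDS code -- and shows the vulnerable pattern alongside the standard fix of escaping output before rendering:

```python
import html

def render_comment_unsafe(comment: str) -> str:
    # Vulnerable: user input is interpolated into HTML verbatim, so a
    # stored "<script>...</script>" payload executes for every viewer.
    return f"<div class='comment'>{comment}</div>"

def render_comment_safe(comment: str) -> str:
    # Fix: escape HTML metacharacters so the payload renders as inert text.
    return f"<div class='comment'>{html.escape(comment)}</div>"
```

In the "blind" variant, the attacker never sees the payload fire; it executes later in a victim's session -- here, potentially an administrator's.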

“As you can imagine, [Islamic State militants], if they had launched that kind of attack, they would have had a field day if they could upload whatever they wanted onto a website that's run by the military,” Litchfield said.

Such propaganda risks are hardly hypothetical; last year, Islamic State sympathizers hacked into U.S. Central Command's Twitter feed and YouTube accounts.

Luckily it was Litchfield, a security researcher and entrepreneur, who discovered the vulnerability -- and he did so at DOD's invitation.

Had he discovered the problem under regular circumstances, it would not have been clear what he could have done about it. Like most other websites, DVIDS does not provide explicit instructions on how to responsibly report problems. Instead, in the “Privacy and Security” section, DOD threatens prosecution for any unauthorized attempts to upload or change the information provided by DVIDS.

But Litchfield was able to report the problem -- and 35 others -- without fear of prosecution because he was participating in DOD's pilot “Hack the Pentagon” bug bounty program, which invited vetted members of the public to rummage around five media-related DOD websites with the goal of uncovering security problems.

Beginning April 18, DOD asked 1,410 security researchers who had registered for the challenge on the HackerOne platform to find vulnerabilities at defense.gov, dodlive.mil, dvidshub.net, myafn.net and dimoc.mil. DOD officials spelled out the terms and conditions under which the activities could be conducted and explicitly stated that hackers would not be prosecuted if they stayed within the given parameters.

Bounty hunters would be awarded cash amounts based on the severity of the bugs they found. Litchfield won the top payout of $15,000. The lowest amount awarded was $100.

Defense Secretary Ash Carter declared the program a success: Ultimately, 252 hackers submitted at least one vulnerability each, and 117 received payouts. The Pentagon promptly fixed all the uncovered bugs.

“Through this pilot, we found a cost-effective way to supplement and support what our dedicated people do every day to defend our systems and networks, and we've done it securely, and we've done it effectively,” Carter said at a June event announcing the results of the program.

It went so well that DOD asked its IT managers to examine all the other areas that could benefit from a bug bounty security checkup. Officials also plan to change DOD contracts to require vendors to submit their products to bug bounty security checks in some instances. And officials will issue a responsible bug disclosure policy to enable security researchers to report bugs without fear of prosecution.

In addition, DOD announced on Oct. 20 that it had contracted with HackerOne, a bug bounty management company, and Synack, a firm that provides crowdsourced security testing and intelligence, to enable DOD components to easily launch their own versions of Hack the Pentagon-style challenges. 

All in all, it's a huge pivot for DOD's top-down culture. And as the pilot program made clear, Defense agencies will have to change further if bug bounties are to scale successfully.

A bounty of work

Crowdsourcing a security checkup sounds fairly straightforward, but two of the architects of DOD's program said a lot of organizational work is involved.

“It's not just a matter of throwing up an email online and seeing what happens,” said Katie Moussouris, founder and CEO of Luta Security. She created Microsoft's first bug bounty program and helped set up DOD's Hack the Pentagon program. “You have to prepare, and most organizations are not prepared.”

Agencies must consider the resources they have before they embark on a similar program, said Lisa Wiswell, the Defense Digital Service's digital security lead.

Those with mission-critical public-facing websites should consider adding bug bounty programs to their existing security and penetration testing procedures, but they should also consider hiring a contractor to manage the process. Among other things, that approach frees up internal technology staffers to focus on squashing the bugs.

For example, DOD received 1,189 bug reports in the Hack the Pentagon pilot, of which only 138 qualified for payouts. Someone had to cull through those reports and verify which ones were valid and which were duplicates.

“The amount of work that people would have had to do to cull through that, to make sure that those reports were robust and to make sure that we could act on them would have created more [of a problematic workload] than would have helped,” Wiswell said. “By paying a contractor, you've outsourced a tremendous amount of the work, and you've allowed the people who know the code base to focus on remediating the vulnerability.”
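Triage at that scale is largely a matter of collapsing duplicates and filtering out reports that can't be acted on. A minimal sketch of the idea (the field names and fingerprinting scheme here are hypothetical, not HackerOne's actual data model):

```python
from collections import defaultdict

def triage(reports):
    """Group reports by an (endpoint, vulnerability class) fingerprint.

    Only the first reproducible report per fingerprint qualifies for a
    payout; later matches are tracked as duplicates, and reports that
    can't be reproduced are dropped as non-actionable.
    """
    payable = {}
    duplicates = defaultdict(list)
    for report in reports:
        if not report.get("reproducible"):
            continue  # nothing to act on, so no payout
        key = (report["endpoint"], report["vuln_class"])
        if key in payable:
            duplicates[key].append(report)
        else:
            payable[key] = report
    return list(payable.values()), duplicates
```

A scheme along these lines is how roughly 1,200 submissions can shrink to the 138 that merited payouts.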

The other advantage to using the services of the emerging class of companies -- which includes HackerOne, Synack, Bugcrowd and several others -- is that they have created platforms where security researchers congregate. Furthermore, the companies have established reputations, expertise in managing what can be an unwieldy process and experience in specifying the parameters of a bug bounty contest. They also know how to triage incoming reports and prioritize the order in which bugs should be fixed.

And figuring out who is responsible for fixing a bug can be complex, Moussouris said.

“I started Microsoft vulnerability research, and one of our big missions was doing multivendor coordination,” she said. “Someone would report an issue with the [Internet Explorer] browser, for example, but when researching it, we found that it actually was a Flash issue. So we would pass it on to Adobe.”

Organizations pay companies like HackerOne to manage the process, and they pay for the vulnerabilities that are discovered. With traditional penetration testing, consultants receive flat fees whether one bug is discovered or 100.

“Instead of paying for the almost 1,200 reports that came in, we paid for the 130 or so bugs that we could act on,” Wiswell said. “We paid for those where we could say: ‘You've given us enough of an explanation, and we know how to fix this, and we're going to go ahead and fix it.'”

The beauty is that bug bounty challenges shorten the bug discovery and fixing cycle from months or even years to a few days or weeks, she added.

But the key is fixing the problems quickly, Wiswell said; failing to act on a discovery in a timely fashion only exacerbates the security lapse. It's also helpful to keep researchers updated on what is being done with their discoveries -- whether and when the problem will be fixed or whether it is already known.

Spelling out the rules as precisely as possible is essential, Wiswell said. “We spent a tremendous amount of time with our legal team and all of the stakeholders across the departments to make sure that we defined our rules and restrictions down to a T,” she said. “You have to make sure that you tell folks what they can do and, almost even more importantly, what they cannot do.”

A spat between a security researcher and Facebook in December 2015 illustrates at least one way that things can go wrong. Wes Wineberg discovered a bug in Instagram and reported it to parent company Facebook. Alex Stamos, Facebook's chief security officer, said Wineberg was thanked and offered $2,500. But Wineberg was unhappy with that amount, so he hacked into the Amazon Web Services account associated with Instagram, where he began accessing technical information. He then told Facebook that he was going to write about what he had found.

Wineberg worked as a contractor for Synack, so Stamos contacted CEO Jay Kaplan and explained the situation to him. He told Kaplan that he thought Wineberg was acting unethically. According to Stamos, Kaplan said Wineberg's actions were neither ordered nor condoned by Synack.

For his part, Wineberg said Facebook could have avoided the contretemps if its bug reporting policy were clearer -- like Microsoft's, which states that “moving beyond ‘proof of concept'” to executing attacks is not acceptable behavior.

Stamos concluded that Facebook should have moved faster to fix the problems and perhaps been more explicit about what it does and does not consider ethical behavior.

Many companies that run bug bounty programs provide lists of dos and don'ts. GitHub, for instance, asks researchers not to access its users' accounts or data, not to launch denial-of-service attacks and not to publicly disclose bugs until they are fixed, among other things.

A glance through the community-curated listings of bug bounties and disclosure policies at Bugsheet.com reveals some commonalities among companies when it comes to disclosure policies (i.e., don't publicly disclose the problem until it has been fixed), but the level of detail provided in the policies varies widely. For example, Uber's bug bounty policy explicitly states that payout amounts are not pinned to the vulnerability itself but to the severity of the potential impact.

“This means, for example, that we will issue a relatively high reward for a vulnerability that has the potential to leak sensitive user data, but that we will issue little to no reward for a vulnerability that allows an attacker to deface a microsite,” the company's policy document states. “When we have our reward meetings, we always ask one question: If a malicious attacker abuses this, how bad off are we? We assume the worst and pay out the bug accordingly.”
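An impact-based reward policy like Uber's can be thought of as a lookup from worst-case impact to payout tier. The tiers and dollar amounts below are invented for illustration -- they are not Uber's actual figures:

```python
# Illustrative severity-to-payout tiers (amounts are hypothetical).
PAYOUT_TIERS = {
    "critical": 10_000,  # e.g., potential leak of sensitive user data
    "high": 5_000,
    "medium": 1_000,
    "low": 0,            # e.g., defacing a disposable microsite
}

def reward(worst_case_impact: str) -> int:
    # Payout tracks the worst plausible impact of abuse,
    # not the technical class of the bug itself.
    return PAYOUT_TIERS.get(worst_case_impact, 0)
```

The point of the design is that two instances of the same bug class -- say, two XSS flaws -- can land in different tiers depending on what an attacker could actually do with each.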

Although every organization operates in its own way, the International Organization for Standardization and the International Electrotechnical Commission offer a comprehensive policy document (with input from Moussouris) that serves as a valuable reference point for vulnerability disclosure processes and policies. The document is numbered ISO/IEC 29147.

'Transparency has to be a two-way street'

David Berteau, former assistant secretary of Defense for logistics and materiel readiness and now president and CEO of the Professional Services Council, said the Pentagon's bug program is an “easily supportable concept” and “a laudable idea.”

“Both the private-sector companies that are contractors to the government and the government itself clearly [have] vested interests in protecting their data and their sites," he said. "I think this reinforces their common protection.”

Berteau added that he couldn't imagine a vendor not taking prompt action when notified about bugs in its systems because vendors often have contractual obligations to do so. Nevertheless, he said he would like to see DOD share the vulnerabilities discovered during bug bounties with the vendor community.

“I think transparency has to be a two-way street,” Berteau said. “I think there are some lessons to be learned and expanded upon, both from the company side and the government side. I think the statement is a very sound idea, but I think that it takes action by all parties to make it work.”

It's a refrain echoed by many in the software industry, who are concerned that the National Security Agency and other federal agencies are themselves undermining the public's online security and privacy by stockpiling vulnerabilities to combat terrorists and child pornographers online and to engage in national security-related activities.

At DOD, however, it appears that officials see bug bounty programs as an efficient way to go after low-hanging fruit -- common vulnerabilities lurking in its millions of lines of code that probably shouldn't have existed in the first place.

In the Aug. 9 request for proposals that led to the new contracts for bug bounty services, one of the requirements was that vendors be able to sort through and prioritize vulnerability reports within 48 hours of receipt.

That seems like an appropriate level of urgency. According to a September 2015 DOD memo, the Pentagon's networks were attacked 30 million times from September 2014 to June 2015.

“Less than 0.1 percent of the 30 million known malicious intrusions...compromised a cyber system,” the memo states, but that still means hackers were able to break into DOD systems as many as 30,000 times over a 10-month span.

Furthermore, “the growing number of cyber intrusions across the department is costing tens of millions of dollars and thousands of man-hours to remediate,” the memo states. And about 80 percent of the incidents “can be traced to three factors: poor user practices, poor network and data management practices, and poor implementation of network architecture.”

After the Office of Personnel Management data breach, the entire federal government saw the dangers of such shortfalls. But another high-profile case -- this one involving British hacker Lauri Love -- illustrates how damaging it can be to leave vulnerabilities unfixed.

In indictments filed with district courts in New York, New Jersey and Virginia, the federal government charged that Love and his co-conspirators exploited vulnerabilities in Adobe's ColdFusion web application to steal information from databases at several federal agencies, including the Army, the Missile Defense Agency, the Environmental Protection Agency and NASA.

“The data stolen from the government victims included the personally identifiable information of hundreds of thousands of individuals, including military servicemen and servicewomen and current and former employees of the federal government,” the indictment states. “The attacks collectively resulted in millions of dollars in damages to the government victims.”

Nevertheless, questions remain about bug bounty programs and the extent to which they can work well in the federal government. For example, what happens if white hat hackers find so many bugs in a project that the assigned pot of bounty money is emptied before a designated challenge period ends?

That question was posed in a document attached to one of DOD's RFPs for a bug bounty service. DOD officials gave no clear answer, other than to state that the department would not treat a suspension of the challenge under those circumstances as a failure to complete the task, as long as DOD program managers fully understood the rules and payout structures from the outset.

Despite such open questions, security experts agree that well-run bug bounty programs with vetted participants are a welcome addition to the existing portfolio of penetration testing and network monitoring programs that federal agencies currently use. For one thing, many traditional penetration testing companies have gotten sloppy, said John Pescatore, director of emerging security trends at the SANS Institute.

Craig Arendt, a security consultant at Stratum Security who participated in Hack the Pentagon, agreed. “Bug bounties can leverage a broad spectrum of talented researcher experience that might not otherwise be available to government," he said. And a well-run program "provides incentives that encourage responsible disclosure.”