Protecting government IT systems at the source
- By Mark Warren
- Oct 14, 2015
It is all too tempting to start an article about the cybersecurity of federal systems with a reference to Edward Snowden or the recent breaches at the Office of Personnel Management. Yet regardless of how serious those cases are, they do not represent all the potential insider threats that must be considered by anyone with a responsibility for IT security.
That's not to underplay the cost of such breaches. Both were very serious episodes. But they are examples of the kind of insider threat that organizations have been aware of for many years -- the theft or leakage of sensitive data.
Most IT organizations, however, have a blind spot: the value of the IT systems themselves and the risk of leaving them unprotected. Many products guard the perimeter -- firewalls and virtual private networks, for example -- but no perimeter defense is foolproof, and the greatest risk to those systems comes from the privileged insider: someone with legitimate access to both the systems and the source code of the applications that manage them. The FBI published a warning about this growing insider threat as recently as September 2014.
There are a variety of insider threat scenarios to consider:
- The hacktivist with a grudge against a federal employer.
- The employee who's about to leave and wants to take some useful bits and pieces along.
- The employee who sells confidential data for profit or under duress.
- The unwitting employee whose account credentials have been compromised by phishing attacks.
- The remote developer who might be sharing his or her credentials while working for a contractor.
The source code behind government applications is especially valuable -- not only because of the investment made to create it, but also because of the highly sensitive nature of the systems themselves. The broad set of applications to consider includes border control checks, military logistics and tax payments. The breach of the source code for any of those systems could have consequences far beyond simple financial loss.
Moreover, the risks are growing more critical because of the changing ways in which systems are developed. A risk many organizations have started to see is the use of low-security development tools -- in particular Git and related tools such as GitHub -- to store source code repositories for development teams. Although the tools are attractive for their low cost and ease of use for developers, they were never built to be secure systems. They lack basic capabilities such as file-level access control, an immutable change history and access protections that can be easily viewed and managed.
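To make the file-level access control gap concrete, here is a minimal, hypothetical sketch of the kind of path-based protections table a centralized, secure repository can enforce on every read and write -- something a bare Git repository does not do. All names, paths and rules below are illustrative assumptions, not any product's actual API.

```python
from fnmatch import fnmatch

# Hypothetical protections table: (user pattern, permission, path pattern).
# "..." is used here, as in some SCM conventions, to mean "this subtree".
PROTECTIONS = [
    ("*",     "read",  "//depot/docs/..."),      # everyone may read docs
    ("dev-*", "write", "//depot/app/src/..."),   # dev accounts may write app code
    ("alice", "read",  "//depot/payments/..."),  # only alice may read payments code
]

def allowed(user: str, perm: str, path: str) -> bool:
    """Return True if any rule grants `perm` on `path` to `user`."""
    for rule_user, rule_perm, pattern in PROTECTIONS:
        glob = pattern.replace("...", "*")
        if rule_perm == perm and fnmatch(user, rule_user) and fnmatch(path, glob):
            return True
    return False

print(allowed("alice", "read", "//depot/payments/ledger.c"))    # True
print(allowed("mallory", "read", "//depot/payments/ledger.c"))  # False
```

The point is not the specific rule syntax but the checkpoint itself: every access to a sensitive subtree is evaluated and can be logged, whereas anyone who can clone a Git repository gets every file in it.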
A recent Gartner report highlighted the risk of those open-source tools and recommended the adoption of a centralized, secure master repository and rugged operating procedures if developers are to continue using such tools.
On the good-news front, we are starting to see a new class of threat detection tools that can watch the source code repository and spot risky behavior. The tools would flag, for example, users who start accessing code they don't normally use, take more files than others with a similar role or access particularly sensitive projects outside normal work hours.
One danger, of course, is that the tool might report too many false alarms because of the sheer volume of repository transactions. Therefore, the best tools not only understand the particular context of source code development, but also use behavioral analytics to raise intelligent alerts.
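A simplified sketch of that behavioral approach: compare each user's repository activity against his or her own historical baseline and flag only large deviations, rather than alerting on every access. The log format, threshold and user names below are illustrative assumptions.

```python
from statistics import mean

# Illustrative history: files each user has pulled per day recently.
history = {
    "carol": [12, 9, 14, 11, 10],
    "dave":  [3, 4, 2, 5, 3],
}

def flag_anomalies(today: dict, multiplier: float = 3.0) -> list:
    """Flag users whose file count today exceeds `multiplier` x their own mean.

    Users with no history are baselined at today's count, so they are
    never flagged on day one (a deliberate, debatable design choice).
    """
    alerts = []
    for user, count in today.items():
        baseline = mean(history.get(user, [count]))
        if count > multiplier * baseline:
            alerts.append((user, count, round(baseline, 1)))
    return alerts

print(flag_anomalies({"carol": 13, "dave": 60}))
# -> [('dave', 60, 3.4)]  -- dave pulled many times his usual volume;
#    carol's 13 files sit inside her normal range, so no false alarm.
```

Real products layer far richer signals on top of this idea -- role comparisons, project sensitivity, time of day -- but the principle is the same: alert on deviation from an established baseline, not on raw activity.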
They say the first step to any solution is acknowledging you have a problem. So start now by understanding how your applications are developed, determine if your development tools are secure and identify the measures you have in place to detect any risky behavior.
Mark Warren is product marketing director at Perforce Software. He has more than two decades of experience in the software industry with roles as a provider and consumer of advanced development tools.