Why you can't stop insider threats

There are no silver bullets for preventing once-trusted employees from spilling sensitive information. But a multilayer approach can reduce the risk and effects of those breaches.

In the wake of the WikiLeaks disclosures, all the soul searching and mandated risk assessments have made one thing painfully clear: Some of the most damaging security breaches originate from inside an agency’s firewalls.

A new study helps quantify that reality. According to the 2011 CyberSecurity Watch Survey conducted by CSO magazine, security breaches caused by once-trusted employees and contractors account for one in five attacks across all industry sectors. Moreover, the consequences of such events can be significant: Insider security breaches are more costly than those by outside hackers, according to one-third of the survey’s respondents.

Such developments are spurring agencies to redouble their efforts to strengthen internal defenses while still giving trusted insiders the access to sensitive information their jobs require.

There’s just one problem: No matter how diligent agencies might be about security, there are no easy answers. No combination of technology and policy will fully protect against someone with special access privileges who decides to betray that trust.

However, that’s not stopping vendors and consultants from making extravagant claims and aggressive sales pitches.

“Vendors out there are saying, ‘We have the magic bullet,’ ” said David Amsler, president and CIO of Foreground Security, a consulting firm. “But let’s be clear: There’s no such thing.”

So what’s an agency to do? For many organizations, the answer sounds like a mantra from the Cold War era: Trust but verify.

With the right mix of monitoring and access control tools combined with better data protection policies, agencies can make accidental or malicious exposures of sensitive information much more difficult. They might also be able to quickly plug a breach as it’s occurring and limit the damage when all other safeguards fail.

But be leery of anyone who promises all that in one package. Any security approach worth its salt will require multiple layers, experts say. Here are the most likely technology and policy components of a comprehensive solution, along with each element’s strengths and weaknesses.

Data loss prevention

If there are any winners in the aftermath of the WikiLeaks scandal, it might be vendors of data loss prevention technologies. According to some surveys, the applications are becoming one of the most popular answers to insider threats. Also known as data leak prevention, DLP can help managers spot when someone saves classified documents to a local hard drive and then downloads them to a thumb drive or attaches them to an outbound e-mail message. Agencies can choose to see reports of such activity on a periodic basis or set thresholds so that alerts appear in real time.

Where it fits in: Whereas a firewall controls inbound traffic, DLP systems provide a barrier that stops sensitive data from leaving an organization. Consisting of software or small hardware appliances that plug into the network, DLP systems can help agencies classify data according to its sensitivity and then impose the appropriate agency policies, ranging from simply tracking data's movement to encrypting data and restricting access.
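
To make that policy idea concrete, here is a minimal Python sketch of how a DLP engine might map sensitivity labels to enforcement actions. The labels, patterns and helper functions are illustrative assumptions, not any vendor's actual interface.

```python
import re

# Policy table: what to do when a file with a given label tries to leave
POLICY = {
    "public": "allow",
    "internal": "log",          # track the movement, let it pass
    "sensitive": "encrypt",     # force encryption before it leaves
    "classified": "block",      # stop the transfer and raise an alert
}

# Content patterns used to classify unlabeled documents (illustrative)
PATTERNS = {
    "classified": re.compile(r"\b(SECRET|NOFORN)\b"),
    "sensitive": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-like strings
}

def classify(text):
    """Assign the most restrictive label whose pattern matches the text."""
    for label in ("classified", "sensitive"):
        if PATTERNS[label].search(text):
            return label
    return "internal"

def enforce(filename, text):
    """Look up and report the action the policy dictates for this file."""
    label = classify(text)
    action = POLICY[label]
    print(f"{filename}: labeled {label!r} -> {action}")
    return action

enforce("budget_memo.txt", "FY12 figures; contact SSN 123-45-6789")
```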

Where it falls short: DLP systems can compromise network performance, especially if agencies configure them to monitor traffic in real time. That approach can stop a policy breach before it happens, but that level of oversight can also cause frustrating delays in daily operations.

In addition, some DLP systems can't read or intercept messages encrypted with Secure Sockets Layer or similar data-scrambling technologies. That makes it difficult to stop someone from encrypting an e-mail message and sending it to an outside destination. “If you don’t have the right technology in place to decrypt the SSL traffic, you are not catching probably a majority of the data” that’s leaving the network, Amsler said.

How to get the best results: The Nuclear Regulatory Commission has been using DLP for about a year to track how closely employees are adhering to information-protection policies. “The awareness piece is key when protecting against a WikiLeaks situation,” said Patrick Howard, NRC’s CISO. “We are seeing if employees are not protecting data appropriately — [by] sending information out and not protecting it with encryption, for instance.”

Partly because of network performance concerns, NRC avoided real-time monitoring and instead set its DLP appliances to listening mode so that they collect activity statistics and summarize them in daily reports that IT managers scour for problems.
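
That listening-mode approach amounts to tallying the day's flagged events into a digest for review. The sketch below illustrates the idea; the log format and field positions are invented, not NRC's actual appliance output.

```python
from collections import Counter

# A day's flagged events in a made-up "date time user action file" format
events = [
    "2011-03-14 09:02 jdoe COPY_TO_USB design_spec.doc",
    "2011-03-14 09:17 jdoe EMAIL_ATTACH design_spec.doc",
    "2011-03-14 11:40 asmith EMAIL_ATTACH lunch_menu.pdf",
]

# Tally flagged events per user (the user is the third whitespace field)
per_user = Counter(line.split()[2] for line in events)

print("Daily DLP activity summary")
for user, count in per_user.most_common():
    print(f"  {user}: {count} flagged event(s)")
```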

No matter how agencies choose to configure their DLP systems, they should set aside weeks or months for planning and preparation. They should begin by getting a clear idea of where the agency stores its sensitive information and then mapping the channels the data typically follows during daily operations.

Agencies shouldn't rush that step. NRC conducted a six-month pilot project to better understand how its data flowed. As a result, security staff members can concentrate on high-impact data storehouses without being overburdened with less critical information, such as constant alerts about downloads of data that is already widely available to the public.

Finally, agencies must not neglect cultural considerations. Officials might need to sit down with employees and even union representatives to explain the security justifications for a higher level of oversight and calm fears that DLP is a tool for spying on workers.

Network analysis and visibility

Designed to be a watchdog of all network activities, network analysis and visibility (NAV) tools can also perform deep-packet inspections, which give managers the ability to peek inside the outer shell of data packets to understand their content.

Where it fits in: Although some aspects of NAV and DLP applications are similar, NAV provides important additional capabilities. DLP keys in on data and files as they’re about to leave the network. But with NAV, agencies can spot suspicious activities before they reach that point — such as when a sensitive file is being downloaded to a PC in an unrelated department.

That eye-in-the-sky aspect of NAV underpins one of its traditional uses: capturing traffic patterns for forensic analysis in the event of a security breakdown. More important for proactive security, NAV applications can combine packet inspection with content filtering to identify and block sensitive documents before they're saved to unauthorized workstations or storage devices.

For example, security managers can zero in on particularly sensitive areas, such as Social Security numbers in the body of a document. In contrast, many DLP products don’t delve into that level of detail and only look for the general headers that document creators attach to files to describe their sensitivity.
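
As a toy illustration of that kind of content inspection, the following sketch uses the open-source Scapy library to scan TCP payloads for SSN-like patterns. It requires administrative privileges to sniff traffic, and a real NAV product would reassemble streams and decode protocols, which this fragment does not attempt.

```python
import re
from scapy.all import IP, Raw, TCP, sniff

# Pattern for SSN-like strings in raw payload bytes (illustrative only)
SSN = re.compile(rb"\b\d{3}-\d{2}-\d{4}\b")

def inspect(pkt):
    # Only examine TCP packets that actually carry a payload
    if IP in pkt and TCP in pkt and Raw in pkt:
        if SSN.search(pkt[Raw].load):
            print(f"ALERT: possible SSN en route to "
                  f"{pkt[IP].dst}:{pkt[TCP].dport}")

# Watch 100 packets without storing them in memory (requires root)
sniff(filter="tcp", prn=inspect, store=False, count=100)
```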

Depending on the NAV toolset and level of control an agency decides to implement, the monitors might compile daily summaries or work in real time to block suspicious traffic that appears to be in violation of agency rules.

Where it falls short: As with DLP solutions, NAV's stringent traffic and content analyses can introduce performance delays. “Anytime you have something working in-line, network people are going to get queasy,” Amsler said. Balancing security and cost is also tricky because, in addition to technology, agencies might need to assign a security analyst to manage NAV activities on a daily basis, he added.

How to get the best results: Agencies should spend time determining what data to key in on and then tune the system to reduce false positives. “The first report might not be exactly what you are looking for,” said Marian Cody, CISO at the Housing and Urban Development Department. “Once you start pulling data, you’ll find there’s tons of it, so you have to figure out what’s noise and what isn’t.”

Filtering out less critical information will also help security specialists avoid spending too much time on any particular threat. “Risk management is huge,” said Kevin Cooke, HUD’s deputy CIO. “You need to make sure you are protecting yourself against the most likely threats.”

Security information and event management

Security information and event management systems collect data from the event logs that routinely record who is using resources in the IT infrastructure and what they’re accessing. SIEMs then use the log summaries to issue alerts when an activity points to a possible security violation. SIEMs also store log data for reporting and auditing activities.

Where it fits in: Sometimes called a manager of managers, an SIEM can aggregate activity data from DLP systems, NAV applications and event logs into one central console for easier viewing.

“This is part of continuous-monitoring activities, where agencies are able to determine that somebody is plugging a new device into a computer to potentially take data off the network that should not be taken off,” said Matt Brown, vice president of information assurance services at Knowledge Consulting Group.

Managers can set thresholds for when to be notified of an anomaly, such as a large transfer of sensitive data to a particular workstation. They can also use SIEM analytical tools to determine appropriate responses and oversee incident handling.
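
A bare-bones version of that threshold logic might look like the following; the event records and the 500-megabyte ceiling are invented for illustration, not drawn from any particular SIEM product.

```python
from collections import defaultdict

THRESHOLD_MB = 500  # ceiling before an alert fires (illustrative)

# (workstation, megabytes of sensitive data it received), parsed from logs
events = [("ws-104", 120), ("ws-104", 410), ("ws-007", 35)]

# Aggregate transfer volume per workstation
totals = defaultdict(int)
for host, mb in events:
    totals[host] += mb

# Flag any machine that exceeds the configured ceiling
for host, mb in totals.items():
    if mb > THRESHOLD_MB:
        print(f"ALERT: {host} received {mb} MB of sensitive data "
              f"(threshold {THRESHOLD_MB} MB)")
```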

The alternative to an SIEM is having someone perform the daunting task of watching traffic patterns throughout the day and making on-the-spot judgments about when to issue an alert. With an SIEM, managers can focus on the alerts to decide whether activities represent a real threat or are within acceptable parameters.

Where it falls short: As with DLP and NAV, fine-tuning SIEM systems can take some work to ensure that agencies see what’s necessary to mitigate risk without being deluged with less critical log reports.

How to get the best results: Before the system can spot anomalies, agencies must develop a baseline of bandwidth use, traffic patterns and other activities observed during normal operations.

“That’s not technically difficult, but it takes a lot of work,” said Steve Carver, an information security consultant at Aviation Management Associates and former information security manager at the Federal Aviation Administration. The process requires a week or two of documenting traffic flows. Or agencies could hire outside consultants who specialize in network analysis and optimization.
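
As a sketch of what that baselining work produces, the fragment below derives a normal range from two weeks of invented daily traffic totals and flags days that fall far outside it. Real deployments baseline many more dimensions than raw volume.

```python
from statistics import mean, stdev

# Two work weeks of observed daily traffic, in gigabytes (invented figures)
baseline_gb = [42, 39, 45, 41, 44, 40, 43, 38, 46, 41]
mu, sigma = mean(baseline_gb), stdev(baseline_gb)

def is_anomalous(today_gb, z=3.0):
    """Flag a day whose traffic exceeds the baseline by z standard deviations."""
    return today_gb > mu + z * sigma

print(is_anomalous(44))   # False: within the normal range
print(is_anomalous(120))  # True: worth an analyst's attention
```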

Endpoint protection

Data encryption has been the go-to technology for securing data on laptop PCs, mobile devices and thumb drives that move in and out of agency offices. If the devices are lost or stolen, precious information stays safe from prying eyes.

But there’s a downside. Staff members able to encrypt data probably have the keys to decrypt it, making it possible for them to release the information to the outside world. In addition to encryption, agencies need to secure so-called endpoints — the interfaces that channel data to USB thumb drives, CD/DVD recorders and other portable media.

Where it fits in: From a central server, security managers can use endpoint security software to document when data flows to endpoints. They can then configure individual ports to deny all access, allow read-only access or permit full access to information.
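
A simplified model of that port-level control might look like the following sketch; the port names, policy table and logging format are assumptions for illustration, not any product's real configuration.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

# Per-port policy: deny everything, allow reads only, or allow full access
PORT_POLICY = {
    "usb0": "read_only",  # staff can read from devices but not write to them
    "dvd0": "deny",       # recorder disabled outright
    "usb1": "full",       # IT staging port with full access
}

def check_transfer(port, operation):
    """Return True if the operation ('read' or 'write') is permitted on the port."""
    policy = PORT_POLICY.get(port, "deny")  # unknown ports default to deny
    allowed = policy == "full" or (policy == "read_only" and operation == "read")
    logging.info("port=%s op=%s policy=%s allowed=%s",
                 port, operation, policy, allowed)
    return allowed

check_transfer("usb0", "write")  # blocked, and the attempt is logged
check_transfer("usb0", "read")   # permitted
```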

Where it falls short: Locking down interfaces can ensure that insiders use only authorized and properly encrypted devices, but it might not fully plug a leak. For example, an endpoint security system might successfully authenticate a device and a person who later inappropriately discloses the data.

How to get the best results: For extra control, administrators can set the endpoint security technology so that insiders must log in to the system and verify their identities before a port opens. Some agencies also require a manager’s approval for downloading data to removable media.

Taking a measured approach to port lockdowns will help users accept security measures. For example, setting a USB port to read-only allows workers to listen to music on their iPods without being able to download sensitive files to the devices.

Finally, administrators might want to adjust acquisition policies to stem opportunities for outbound data flows — for example, by not automatically including USB interfaces and DVD drives in PC and workstation orders. For some employees, such as IT administrators who upload new software, those accessories will be necessary. But limiting the number of devices to oversee will make the security job more manageable.

Rethinking data-sharing practices

Another possible response to the specter of insider threats is to revise policies to more stringently limit access to information. In the wake of the WikiLeaks revelations, that approach might become more prevalent at agencies. Late last year, Defense Secretary Robert Gates signaled an era of tighter controls at the Defense Department when he told a reporter that the WikiLeaks incident showed the aperture for data sharing had opened too wide.

Certainly, establishing tighter need-to-know policies could be part of any solution. But those moves must be weighed against the risk of introducing blunt-force security measures that hamper an agency’s internal operations or cross-agency data sharing.
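
One way to picture a tighter need-to-know policy is as a second check layered on top of clearance level, as in this hypothetical sketch; the users, levels and compartments are invented.

```python
# Clearance alone is not enough; the user must also be read into the
# compartment the document belongs to. All names and levels are invented.
CLEARANCE = {"jdoe": 2, "asmith": 3}                    # 1=public .. 3=secret
NEED_TO_KNOW = {"jdoe": {"budget"}, "asmith": {"budget", "cables"}}

def may_read(user, doc_level, compartment):
    """Grant access only when clearance and need to know both check out."""
    return (CLEARANCE.get(user, 0) >= doc_level
            and compartment in NEED_TO_KNOW.get(user, set()))

print(may_read("asmith", 3, "cables"))  # True: cleared and read in
print(may_read("jdoe", 2, "cables"))    # False: cleared, but no need to know
```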

Appropriate access to information is necessary even though agencies might never be completely safe from insider threats. “You may never come up with a 100 percent, foolproof technological solution,” NRC’s Howard said. “You are always going to need to rely on your users to a certain degree, and that’s a balance that you have to strike.”