Continuous monitoring means different things to different people. NIST describes it as "a risk management approach to cybersecurity that maintains a picture of an organization’s security risk posture, provides visibility into assets, and leverages use of automated data feeds to quantify risk, ensure effectiveness of security controls, and implement prioritized remedies."
Continuous monitoring generates data that drives action, creating a continuous feedback loop of risk identification and mitigation. It is a set of technology-driven processes helping to fulfill a core organizational requirement. It is an initiative based on a standardized reference architecture, focused on the security controls in NIST SP 800-53 and the CAG Critical Security Controls. The longer-term goal is to develop a component-based approach in which solutions from multiple vendors can be combined to provide better visibility into the security activities and posture of the overall network.
It’s really both. To do it effectively, you must have the right technologies in place and configured correctly. This includes scanning tools, audit log tools, vulnerability management tools, GRC solutions, etc. You have to know where they are in terms of the infrastructure and what the goal is for them. But you also have to have the organizational processes that can help you make sense of the data these tools are collecting.
I think one of the major pitfalls in continuous monitoring is that people think monitoring is just implementing certain technologies. How we see it is that, while there’s certainly a big technology piece to it, there also has to be the organizational processes that enable you to understand what you’re trying to accomplish with continuous monitoring in terms of understanding your risk and ultimately reducing that risk to make it effective.
There is no one-size-fits-all continuous-monitoring-in-a-box solution. It’s an evolving set of tools and processes allowing network staff and agency management to view the overall health of the connected environment. It will help them to make the changes needed to better secure their environment. Continuous monitoring goals push organizations to do a better job of the security basics, and to ingrain those into their day-to-day operations.
It depends. Different agencies have different risk profiles based on the mission of the organization and the types of data that traverse the infrastructure. So I think the capabilities, tool sets, and resources for analyzing the data are all different depending on the size, data types, and organizational mission.
Measurement is a critical part of continuous monitoring. Since the intention is to move an organization in the right direction, staff needs concrete data on the state of a network’s security components. Someone owns those resources and needs to be held accountable for the performance of the continuous monitoring program.
For example, you may want to measure the number of vulnerabilities detected on a host, the time it takes to field critical patches, deviations from required security compliance settings, how often AV is updated and run, etc. Those are just a few of the potential components in a larger scoring model.
Regardless of what is scored, the scoring model needs to be consistent, mathematically rigorous and motivational for those who own the scored network resources.
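The components above could feed a simple weighted scoring model. The sketch below is illustrative only: the metric names and weights are assumptions, not a prescribed formula, and a real model would be tuned to the organization’s risk profile.

```python
# Illustrative weighted scoring model for scored network resources.
# Metric names and weights are hypothetical examples.
WEIGHTS = {
    "critical_vulns": 10.0,    # open critical vulnerabilities on the host
    "high_vulns": 5.0,         # open high-severity vulnerabilities
    "config_deviations": 2.0,  # deviations from required baseline settings
    "patch_days_overdue": 0.5, # days past the patching deadline
}

def host_score(metrics):
    """Higher score = worse security posture for this host."""
    return sum(WEIGHTS[name] * metrics.get(name, 0) for name in WEIGHTS)

def program_score(hosts):
    """Score every host so resource owners can be held accountable."""
    return {name: host_score(m) for name, m in hosts.items()}

if __name__ == "__main__":
    hosts = {
        "web01": {"critical_vulns": 2, "high_vulns": 3, "patch_days_overdue": 10},
        "db01": {"config_deviations": 4},
    }
    print(program_score(hosts))  # {'web01': 40.0, 'db01': 8.0}
```

A consistent, transparent formula like this is one way to make scores mathematically rigorous and comparable across resource owners.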
We recommend a set of metrics based on the specific organization’s risk profile, from asset identification to acceptable levels of risk, to baseline and then quantify threat identification, mitigation and response, thus continuously reducing overall risk, which leads to resiliency.
You have to give some thought to what you are trying to get out of it from the perspective of security goals, and then define metrics that are tied to those goals. Those goals should also be security focused, as opposed to compliance focused. For example, using the number of assessments or contingency plans you’ve completed doesn’t help to measure the effectiveness of continuous monitoring. Instead, you should look at the length of time vulnerabilities are open or the response time to deal with incidents. Ultimately, the proof of an effective continuous monitoring program is the reduction in the number and severity of incidents.
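One of the security-focused metrics mentioned above, the length of time vulnerabilities stay open, could be computed roughly as follows. The record fields are assumptions for illustration.

```python
# Sketch of a "time vulnerabilities are open" metric.
# Field names ("found", "closed") are hypothetical.
from datetime import date

def days_open(found, closed, today):
    """Age of a vulnerability; closed ones stop counting at the close date."""
    end = closed if closed is not None else today
    return (end - found).days

def mean_days_open(vulns, today):
    """Average open-age across a set of vulnerability records."""
    ages = [days_open(v["found"], v.get("closed"), today) for v in vulns]
    return sum(ages) / len(ages)

vulns = [
    {"found": date(2016, 1, 1), "closed": date(2016, 1, 11)},  # 10 days
    {"found": date(2016, 1, 5)},                               # still open
]
print(mean_days_open(vulns, today=date(2016, 1, 15)))  # 10.0
```

Tracking this number over successive reporting periods is one concrete way to show whether the program is actually reducing risk rather than just producing paperwork.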
Everyone complains about not having enough money, but most organizations already have what is needed to get started, including AV, the capability to patch operating system and application software vulnerabilities, and SCAP-enabled products to evaluate FDCC/USGCB content.
Budgeting for a continuous monitoring initiative focuses on the specific goals the government is trying to accomplish. Organizations would prefer if they got a budget as a separate line item, but more than likely funds will be shared among programs. Then it becomes a prioritization process based on where they are in their current implementation of continuous monitoring. If DHS and NIST want an agency to implement continuous monitoring differently from how it’s been done in the past, then obviously the more funding they have for it the better it will be.
Budgets affect every piece of a continuous monitoring program, certainly from an implementation perspective. There’s a heavy reliance on technologies and the data they process, and unless those technologies are already in place it will require a large capital investment to purchase them and implement them in an effective manner.
With a lot of our customers, those technologies may already be in place but only some of their capability is being used effectively for continuous monitoring, so there has to be a gap analysis of what features can be implemented and what can’t. If they don’t have the money for any more investment in technology, then they will have to put more of their planning focus on trying to understand the risks to the organization and to build in compensating controls.
To make the most of the funds agencies have available, we recommend they have a continuous monitoring program in place for their entire system inventory, but tailor the monitoring according to the level of risk they ascribe to each system. For some they may decide it’s acceptable to test them less frequently, or not to have eyes on them all of the time.
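The tailoring idea above could be expressed as a simple policy mapping risk level to monitoring cadence. The levels and intervals here are hypothetical placeholders, not recommended values.

```python
# Hypothetical risk-tiered scan cadence: lower-risk systems are
# tested less frequently. Intervals are illustrative assumptions.
SCAN_INTERVAL_DAYS = {"high": 1, "moderate": 7, "low": 30}

def next_scan_due(last_scan_day, risk_level):
    """Return the day number when the system is next due for a scan."""
    return last_scan_day + SCAN_INTERVAL_DAYS[risk_level]

print(next_scan_due(100, "moderate"))  # 107
```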
The simple answer is it adds complexity into the equation, due to configuration issues as well as becoming the focus of new threats and exploits which open up new vulnerabilities that must be managed. Virtualization technologies also connect to network infrastructure and storage networks and require careful planning with regard to access controls, user permissions, and traditional security controls. When you acquire security and virtualization products, you need to consider how they impact one another. Agencies should consider the gaps in their security posture that virtualization will expose and review their security architectures, policies and processes in order to implement strategies covering people, process, and technology that bridge these gaps.
Virtualization introduces a level of complexity as far as systems risk is concerned that requires an organization to clearly understand the decisions that must be made regarding whether to virtualize and what the impact will be. Those decisions will dictate whether a system is a good candidate to be put into a virtualized environment, or if they are not willing to accept the risk. So it all comes down to the level of risk tolerance the organization has, and understanding how virtualization affects that risk.
Moving to the cloud involves taking a leap of faith. The point of moving to the cloud is to transfer responsibility for the system in question. While agencies can transfer responsibility, they cannot transfer accountability.
When they make a decision to move to the cloud an organization must understand the security posture of the environment they are going into. They have to understand how to validate how the cloud provider performs continuous monitoring, how often they monitor, what the monitoring looks at and who has access to the data the monitoring produces. Then they have to make sure that the contract with the cloud provider adequately addresses the level of assurance they require and that it understands the risk to the organization’s data.
Great strides have been made in security automation in the past few years. SCAP efforts are opening up siloed security information locked inside individual security products. This information needs to be collected and made much more actionable than it is today.
Vendors need to develop the means to share security information. To do so vendors need to participate in security standards efforts such as the SCAP community work and international efforts such as the IETF. Agencies should expect vendors to show they are embracing real standards and the efforts behind them, both in their participation and their products.
Overall, a vendor providing continuous monitoring needs to demonstrate an understanding and a vision for cyber security, with the tools and presence to detect, mitigate and respond to threats, while measuring progress and constantly improving. This is a combination of threat mitigation at all points from network edge to endpoint, with detailed reporting and threat data as part of the scoring and feedback loop.
Analogous to diet and exercise, continuous monitoring is not one tool, but a network lifestyle for greater resilience.
In terms of cloud providers, I think agencies should expect the provider to understand that it’s been asked to provision for continuous monitoring, and that it has contracted with a Third Party Assessment Organization (3PAO) to perform continuous monitoring and that organization does, in fact, consistently monitor and assess risk.
In terms of a continuous monitoring provider they should expect companies like us to help them develop both the strategic and tactical goals for implementing continuous monitoring and to help develop the program itself. They should also expect that the company ensure there are measures in place to enable it to report on the implementation and effectiveness of the continuous monitoring program.
At McAfee, we take our role of protection very seriously, and continuous monitoring is key to the future of cyber security. That means ensuring systems are sufficiently easy to implement and use that they become part of a culture of cyber security leading to resilience. Better detection leads to better mitigation, faster response, and eventually increasing the frequency of thwarted attacks thus decreasing the need for response.
We frequently find significant attention is focused on the technology to the detriment of the people side of things. It’s vital that all involved in continuous monitoring, from senior management to systems administrators, have access to training and educational material about the technologies and the program. They need that to understand the plan for the organizational process changes required to manage any new solution and its implementation.
That’s a pretty generic statement, but the reality is that continuous monitoring requires buy-in from all aspects of the organization that will be touched by the program. People need to be trained appropriately, and many of those aspects won’t involve IT personnel.
It shouldn’t be limited to just IT personnel. All users have an active role in the protection of organizational data and information security, so they also need to be included. Something that we advocate is role-based training including for those people with security-specific responsibilities, as well as for general users. Training should then be tailored to those responsibilities and to the role the individual performs in the organization.
So, if you’re a general user it should be focused on awareness and good information security hygiene. If you are a security officer who is doing the scanning and reviewing the results and audit logs for various systems, or a security assessment professional who needs to know about the latest vulnerabilities and what technologies are available for analyzing risk and so on, then the training has to be different.
A role-based training program is appropriate for everyone in the organization involved in continuous monitoring.
The whole nature of continuous monitoring is pretty complex, with NIST calling for agencies to both monitor for suspicious traffic on the network as well as to deploy far more proactive and comprehensive means of assessing exposed vulnerabilities and the underlying points of risk.
Uncertainty is caused not only by the complexity but also by the number of different mandates being placed on agencies all at once. Each has some overlap, but they all address the overarching problem of cyber security. The pushback is by agencies trying to get a better understanding of the individual requirements so they know what needs to be provided to whom and for what.
However, some of the pushback is because there’s no penalty for non-compliance. Most agencies have some sort of continuous monitoring in place, but the degree of robustness can be debated. They don’t have the budget to comply with unfunded mandates, and so those don’t reach the top of the priority list until there’s a substantial penalty for non-compliance.
That being said, are the mandates themselves realistic? They probably are, but agencies just can’t execute on them.
The biggest reason there’s been so much confusion is that there’s really no single definition of continuous monitoring across government, so each agency has interpreted what continuous monitoring means to them. Some believe it’s a simple implementation of BigFix or ArcSight or some other technology, but ultimately it requires a very large cooperative effort from more than just the security group. It does require the agency itself to make sure the planning and coordination is done so that the organization can implement continuous monitoring effectively.
But I do think there needs to be more consistency in terms of what elements are required for continuous monitoring. Certainly, with enterprise programs such as FedRAMP and some of the larger .gov security initiatives that are evolving, they should provide some additional clarity and guidance as to what continuous monitoring incorporates.
It’s not a “standalone”. It’s really a set of organizational processes and technology providing visibility into an organization’s health from a security perspective. No one can sell you a single product for solving all your continuous monitoring needs. It’s an orchestrated set of security tools providing the information needed to determine how the organization’s network security either improves or deteriorates.
I don’t think you can do it as a standalone program in terms of you just saying this group over here is responsible for continuous monitoring and that is all they do. It needs to be coordinated across the organization because it does include both technology and organizational processes and responsibilities that have to work cohesively together.
It’s important to leverage the things that agencies already have in place, be they technology or organizational processes. It’s also important to find what’s missing and bring that into play so you know you have the right continuous monitoring response based on risk tolerance. But it can’t be standalone, it needs to be part of the whole picture that includes what’s there today, as well as a gap analysis of what else is needed.
We recommend customers always, always focus on risk management. Every step of implementation and use should focus on mitigating risk.
Before starting it’s imperative to know what continuous monitoring is, what it’s not, and how to leverage tools already in place to provide the monitoring needed. Organizations have many of the monitoring-specific security tools within their networks. They also have log management and SIEM systems to tie the data together for better risk posture and event management. These components, along with strong monitoring policies, are instrumental in continuous monitoring efforts today and in the future as new requirements arise.
What to monitor, how to monitor and where to monitor can mean almost anything to an IT department. It’s important to determine what needs to be monitored and set monitoring policies around those specific needs.
Technology too often ends up being the focus. NIST guidance for continuous monitoring contains a pyramid depicting three tiers: organization, mission/business processes, and information systems, with the information systems level supporting the other two. Above IT, at the organization level, governance strategies and structures for assessing and managing security-related risk must be created. At the mission/business process level, missions and business processes must be prioritized according to an organization’s objectives and goals, and a strategy for identifying and protecting critical information must be created.
Risk assessment, evaluation, treatment, acceptance and monitoring strategies must be developed with multiple parties, including non-technical business managers. Make sure to get buy-in from senior stakeholders across the organization; those who own the aspects of the network being monitored and measured are critical.
Poor planning and, consequently, poor implementation. Also, deploying too many tools. A lot of organizations have looked at continuous monitoring as something that’s technology-specific, and so they’ve tried to implement a number of tools as their continuous monitoring strategy without properly identifying what data they are receiving from those tools and how they are going to use them to either understand or reduce risk.
From a technology perspective it can lead to the tools not being properly configured, or agencies implementing them in a way that is not part of a bigger picture of continuous monitoring. It also leads to trying to solve too many issues too quickly and not using a well-thought-out, strategic, risk-based approach to implementing continuous monitoring.
Because of the nature of the continuous monitoring initiative NIST and DHS are promoting, it will have to be done in phases. It’ll take time for the standards to advance to where we can get to the component-based approach that’s envisioned. Agencies will have a design phase, which will use gap analysis to look at what they already have in place and what else is needed, with potential procurement needs defined through that. There’ll be a deployment phase, a training phase, an organizational awareness phase, and finally an ongoing refinement and enhancement process.
And you may have done all the previous phases on a specific portion of your networked resources, such as the desktop community, and then need to repeat the effort on another targeted section of networked resources.
The first essentials are to establish what your agency hopes to achieve from the effort and to make sure all involved understand those goals. Understanding what assets are already deployed and the consistency of that deployment is important. If you have many different products doing the same job in different areas of the network, you may have a tougher time aggregating all the information into a single source.
It can be done in phases. It’s absolutely essential to start with an understanding of what the agency’s security goals are, what the risk tolerance is, how they currently monitor vulnerabilities and in what order and areas they’re going to improve monitoring.
One of the pitfalls of continuous monitoring is that people jump into it with technology without understanding what their organizational security goal is and what they want to get out of continuous monitoring. It’s important that an organization really looks to define what continuous monitoring is for them, then they can build a strategy around that. Continuous monitoring can be done in phases, but you need that strategy to have a clear idea of the benefits to be derived from each phase. In that way, one phase builds on another.
Reporting, logging, and a SIEM with augmented intelligence feeds can help take a flood of data to a refined set of events that actually matter. That helps humans and the machines to find and act on the best information.
Continuous monitoring doesn’t require that everything (all systems, applications, network endpoints, infrastructure, security processes, etc.) be monitored everywhere and at all times. NIST identifies a three-tiered system of low, moderate and high impact to use when developing a monitoring policy. Once you’ve determined what systems and processes need monitoring, the policy should include the events that would trigger these systems to send alerts. The monitoring system should be set to look for correlating activities that show what is and is not important in these.
The ability of the monitoring system to be content and context aware of the aggregated data is imperative. It allows operators to make sense of the large amount of data being reported, and to take action on the most critical ones while also not missing those that could provide context if aggregated with other events.
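The kind of correlation described above, aggregating many raw events into a few actionable alerts, can be sketched in miniature. The event fields and the threshold below are illustrative assumptions, not a description of any particular SIEM product.

```python
# Minimal sketch of event correlation: reduce a flood of raw events
# to the few that matter by grouping and thresholding.
from collections import Counter

def correlate_failed_logins(events, threshold=3):
    """Raise one alert per host that crosses the failed-login threshold."""
    fails = Counter(e["host"] for e in events if e["type"] == "login_failure")
    return [
        {"alert": "possible brute force", "host": host, "count": n}
        for host, n in fails.items()
        if n >= threshold
    ]

events = [
    {"host": "web01", "type": "login_failure"},
    {"host": "web01", "type": "login_failure"},
    {"host": "web01", "type": "login_failure"},
    {"host": "db01", "type": "login_failure"},
    {"host": "db01", "type": "login_success"},
]
print(correlate_failed_logins(events))
# [{'alert': 'possible brute force', 'host': 'web01', 'count': 3}]
```

Five raw events collapse to one alert; the isolated failure on db01 is kept in the counts, so it could still provide context if aggregated with later events.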
I think this is one of the biggest obstacles to continuous monitoring. Obviously, to make sense of large amounts of data you need to understand what that data is and have the ability to review the data, whether that’s in the form of some sort of consolidation or correlation engine, or if it’s humans reviewing it. And even if you’ve made the investment to purchase or implement a correlation tool, there still needs to be resources associated with going through the reports, understanding the vulnerabilities and alerts and weeding out the regular events from the true security incidents you are looking for.
Without defining what the goals are for the continuous monitoring program and how all of the various pieces fit together, you can be quickly overwhelmed with a bunch of data feeds that you don’t know how to handle because you haven’t done that legwork on the back end.
The data have always been there, they’ve just been a secondary focus, since the two approaches agencies use to formulate their security plans are network first, which looks to provide defense at the periphery, and then the data. Many agencies interpret the new continuous monitoring regulations as a mandate to lock down all endpoints on the network, which is a complex task involving firewall deployment, network scanning, configuration and patch management at the device level.
The data-centric approach focuses on the data and where it lives. Data-centric approaches protect data by identifying and fixing database vulnerabilities before they can be exploited. A large agency may have hundreds of database instances. Continuous monitoring at the database level is a much more automatable and manageable task.
In addition, the use of the term “data” involves “telemetry,” and that’s required for both approaches. The old adages “You can’t manage what you can’t measure” and “In God we trust; all others bring data” are very true. The data produced is the foundation for measurements and metrics. Data provides the organizational ability to put in place a framework for better decision-making based on expectations and what’s observable. The more data there is about the state of the security devices, segments, locations, geographies, etc., the easier it is to see what needs to be done and where to target the efforts.
Security data is absolutely critical. You have to understand the security posture, where the vulnerabilities lie and the types of incidents that are happening in your infrastructure. But data alone is not as effective as it could be unless you define the context for reviewing the data. Only then can you use the data to make decisions on risk.
So, organizations have to spend the time to define the monitoring technologies they have that are feeding them the data and ask themselves how it’s being reported, who’s monitoring it, who’s responsible for taking action, where the existing holes are and how those will be closed.
That’s the challenge in defining continuous monitoring. You must be able to sort through the data and understand how the pieces fit together so that you don’t become overwhelmed or stovepiped with focusing on one data stream or element. That’s the only way to make sure you are using the data you have in the most effective way possible to understand and reduce risk.
Agencies need to ensure they have an overall strategy for addressing change because no one operates in a static environment. The key is having security components that fit into an overall approach to data integration, providing actionable intelligence.
Agencies need to make sure the underlying security products operate within a common, standards-based information framework. Agencies need to adopt standards wherever possible so they have a choice in future procurements. Otherwise they’ll get locked into proprietary solutions. Agencies should build their continuous monitoring program so it fits with the reference architectures both NIST and DHS are basing their efforts on. This allows agencies to evolve and change as the government-wide continuous monitoring initiative advances.
One thing we advocate is that, as systems are initially authorized, continuous monitoring plans should be a part of the authorization package and that the authorization plan should be managed by the information systems security officer. The Compliance and Risk Management program is then responsible for reporting on the effectiveness and implementation of that continuous monitoring plan. That allows for a tailored continuous monitoring plan to be implemented system by system, depending on the complexity and the level of risk each system presents to the organization.
8609 Westwood Center Drive, Suite 500, Vienna, VA 22182-2215 703-876-5100 © 1996-2016 1105 Media, Inc. All Rights Reserved.