It’s a very flexible term, and at the moment it probably means more in the commercial space than in the government marketplace.
It started out several years ago as a way to describe the efficient use of power and cooling, and there was a lot of talk about going green and paying attention to how much energy was consumed. Then virtualization took hold, and all of a sudden there were assessments of the utilization rates of IT assets, particularly servers in the data center. Today, it's been expanded even further to cover such things as storage and networks.
The problem in government is that there’s no one person who owns the budget for all of the components involved in data center efficiency. In large commercial concerns you can find one person responsible for the spend on facilities, cooling, power, IT assets and so on, and it’s therefore relatively easy to implement change. In government, the responsibility is very diversified. The CIO may own the IT assets, for example, but not the facilities or the cooling and power.
In government, the challenge comes with bringing all of this together at the highest level.
Very knowledgeable. There are some absolutely brilliant people in important roles that affect the IT architecture that’s either already been implemented or that’s being considered for the future. The CIO position has gotten stronger in government agencies over the last decade, and there have been lots of commercial best practices published in this space that are recognized by the federal government as opportunities.
But there are challenges. One is the decentralized control of the dollars. Another is how government funding is organized: you can't use operations and maintenance money to buy capital equipment, for example. And then there's the inherent separation of the network and other assets, depending on the agency mission.
Despite all of that, a tremendous amount of progress has been made in the past couple of years, and given the movement and direction we see I think there’ll be other great strides made next year and beyond.
They are things such as the cost of facility, cost of power, location of facilities and the inefficient cost of servers as that relates to asset utilization. Server virtualization, which will improve server utilization, is well under way and there are many mandates in place to push people to even higher server utilization. There are still significant opportunities, however, particularly in the use of power and the locations of the data center facilities themselves.
You have to consider the whole. Organizations initially went after servers because they were spending a lot of money on them but getting very low average utilization rates. So, improving utilization translated into a big business benefit. But, if you look at data center efficiency in the truest sense, you have to consider how servers interact with the storage and network infrastructures, and how the software works across those platforms to support the enterprise.
Agencies could and should contemplate this big picture when they are considering improving data center efficiency. However, the practical reality of either the mission they’re responsible for or the available budget is what drives people to a more incremental approach. Given the decentralized nature of spending and the fragmented enterprises they have to deal with, that’s probably all they can do at this point.
First, apply virtualization not just to servers but also to storage devices and to the network infrastructure. Then a move to the cloud, and there’s a lot of evidence that the cloud is providing commercial-like return on investment for government. And, finally, overall consolidation of the data centers themselves.
But perhaps the most important thing is for leadership to be applied to solving problems and to take advantage of what’s available now. Brocade, for example, has just introduced what we call Brocade Network Subscription, which is where we provide options for customers to build out their network infrastructure. They effectively rent the infrastructure, which can easily scale up or down depending on their requirements. That’s a recognized commercial best practice.
It is primarily an IT responsibility, but it depends on the situation. The Defense Information Systems Agency (DISA), for example, provides enormous capabilities to the DOD through its data centers, but elements across the Army, Air Force, Navy and Marine Corps also have a role through characterizing what their mission needs are and how that plays into the service level agreements and level of support they require. I would say that's a shared responsibility.
Brocade has a significant R&D budget that's been in place for more than 15 years and which we've been using to develop our LAN, SAN and data center products. Through that investment we enable customers across the federal government to reduce infrastructure and operational costs.
We’ve introduced Brocade Network Subscription as a way to talk to government customers about how they can reduce their infrastructure and operational costs. We’ve done a lot of things to lower power consumption. Brocade has a fantastic range of products for the whole Ethernet environment. And our technology scales according to mission and data center needs, something that is highly valued by government customers.
Also, the products we bring to the market are very open, so we don’t lock customers into one way of doing things.
The government is going to make use of both public and private clouds because it will never move its entire infrastructure to solely public clouds. How it does that, however, will depend on application workload and mission. The closer you get to the intelligence community or to the warfighter, the more restriction you’ll have on using commercial best practices. Also, the more likely they’ll be to either stay inside the private cloud, or just deploy traditional data center technology to support the mission.
It does play a part, and in a significant way. As you virtualize servers, storage and networks, you are completely changing the paradigm of computing that has been established over the past 20 years. So, the software and applications used in this new environment have to be more virtualization-aware. The management tools have to grow and mature, and they have to address scaling needs. VMware today allows for the virtual allocation of processors or servers to meet spikes in requirements. But the capability to allow that to happen across a completely virtualized and efficient data center still needs a lot of work.
Generally, there’s a lot of work that has to be done to redesign software. Traditionally you have a database that sits on a server that runs a particular operating system that talks to various storage devices that sit on a network and operate in certain, specified ways. Each of those assets – database, server, storage and network – is generally acquired individually. But tomorrow, via cloud and other means, these technologies will have to be integrated in such a way that just one service is provided to the user.
That’s actually one of the problems with data center efficiency. The CIOs at organizations have gone and bought hundreds of products from best-in-class companies and deployed them in the data center, but the effort that has to be expended in managing them all is disastrous from a cost and efficiency standpoint. So, the way future engineering budgets are allocated, and the way that companies partner to provide cloud services, will be a big deal.
We have a technology that allows us to increase the speed between devices and increase the overall capacity of the infrastructure according to changes in performance requirements. And that addresses a perpetual problem in government, where capacity requirements seem to grow dramatically every quarter. So, no matter how good agencies are at planning their IT infrastructure needs, it does appear that the capacity required often exceeds the capacity on the floor. Our technology helps them address this issue.
What agencies do now is try to predict their exact capacity needs. They acquire infrastructure that enables that capacity, but invariably the actual needs are different from what is planned for and acquired. Brocade Network Advisor provides them with a better idea of their network flow and therefore allows for better management of the network. It provides alerts so you know such things as when applications are either running out of bandwidth or the network is slowing down. But we’ve had that for some time now, and other companies in this space have something similar.
The key message is that agencies should adopt those commercial best practices that allow them to scale their IT enterprises and their data centers. Commercial companies are investing their own R&D money in this, and to the extent that government agencies can align with those best practices that’s where they’ll get the best return on that investment. But if they don’t align with them, I think you’ll have a data center efficiency gap that will continue to grow, not close, over time.
What metrics they use depends on what they believe their particular efficiency challenges are. If it's to do with power, then they'll measure power utilization and they'll do everything they can to put assets into the data center or the enterprise that will bring power consumption down. If it's to do with the cost of the facilities, they'll measure cost per square foot. If it's server utilization, they'll measure the rate for servers currently on the floor and then drive utilization to meet whatever target they choose.
There's no one metric for measuring data center efficiency. But there are a number of good ones; everyone's using something, but no one's following a single recipe book.
If you look at commercial best practices in operation, you'll see that there's been a dramatic reduction in the total number of data centers in businesses. Over the past decade or so, the federal government has built a lot of data centers, to the point where it has around 2,100 in place now, and there's a plan to reduce those by some 800. That's an example of the government adopting commercial best practices.
The other thing is that there’s an opportunity to share across agencies. If you take the example of state and local government, five years ago agencies there didn’t share many of their IT assets and data centers. Now, much more of that is happening, in particular with data produced by financial applications.
So: reduce the number of data centers, share assets and data, and let the companies that are already spending enormous amounts on R&D build out IT capabilities for government agencies so they can focus on their core missions. Those are three best practices that will immediately benefit government organizations.
The biggest pitfall to avoid is getting locked into one vendor. The other is ignoring commercial best practices, because the further away you are from them, the more likely you'll be spending more money on your infrastructure than you need to, and you'll be falling further behind on efficiency programs. And at all costs, try to keep yourself in an open environment with companies that provide open standards and interoperability across a wide range of OEM partners.
As to emerging technologies that could make a difference, the overall virtualization of the data center will have the biggest impact on efficiency. Organizations should be wary of vendor lock-in and proprietary, vertically stacked technology. Just as resources need to move with agility between data centers within the cloud, the ability to move among best-of-breed solutions over time ensures an agency won't be locked into a single, expiring solution that fails to meet business needs.
Working with vendors such as Brocade that place an importance on open standards and an open partner ecosystem ensures that the solution will adapt to evolving technologies as they mature and change. There are several emerging technology standards that promise to build more efficient data centers, including TRILL, SPB and VXLAN. Agencies should always evaluate vendor support for such emerging standards as they consider their future needs in the data center.
They need to have tools that manage everything from the use of facilities to the use of power, but more than anything it's those tools deployed across agencies that will help them meet the goals they set for themselves, whether that's reaching the number of data centers they want, the server utilization they desire, the service level agreements they want to provide to their users or the management of workload across the network. There are tools that will help them with that. To the extent they can decide on what their goals are, we can help them meet those.
We'll continue to be a company that partners really well across multiple OEMs, similar to the model we employ today. We'll continue to be very focused on helping all of our customers drive costs out of their IT infrastructures by enabling efficiency. Today we provide products that leverage emerging technologies and services, such as Brocade Network Subscription, but at the same time we have a wonderful heritage in the Fibre Channel SAN environment, and that also enables us to bring higher efficiencies into data centers.
We’ll continue to innovate, as we have over the past 15 years, around the network infrastructure business, and as we innovate we’ll continue to closely align with our OEM partners, who are the best in the world. The products that result from that will support commercial best practices, which in turn we’ll apply to the needs of the federal government and its data center requirements.
8609 Westwood Center Drive, Suite 500, Vienna, VA 22182-2215 703-876-5100 © 1996-2013 1105 Media, Inc. All Rights Reserved.