If you want virtualization success, look to storage

Storage, which has become a secondary consideration in many physical infrastructures, is the Achilles' heel of virtualization. Contention from competing resource demands puts the focus on storage I/O, and the pressure will only increase with desktop virtualization. Managing it requires a multi-layered view of the virtualized infrastructure.


Storage presents an interesting conundrum for agency IT managers when it comes to virtualization. Because the characteristics of networks, servers and client systems in the physical infrastructure are well understood, matching storage to requirements there has become a relatively mundane exercise. Not so in the virtualized infrastructure.

“Storage is still the Achilles’ heel for any virtualization solution,” said Paul Schaapman, a solutions architect at CDW-G. “If you don’t design it properly, with the right amount of [input/output operations per second] built in to match the demand of the server groups that are going to be put online, then you will get in trouble.”

Initially, people building virtualized environments didn’t bring IOPS into their considerations, he said. But they now realize that, as you lay down virtual machines on top of targeted local-area networks, you’ve got to make sure that they are optimized for the storage that’s going to be needed.

“Peak demand is what you are really architecting for,” Schaapman said.
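To make that concrete, the back-of-the-envelope arithmetic behind peak-demand sizing can be sketched in a few lines of Python. Every per-VM figure below is an illustrative assumption, not a measured value, and the RAID write penalties are the commonly cited rules of thumb; the point is only to show how group-by-group peak IOPS add up against what the back-end disks can deliver.

```python
# Back-of-the-envelope IOPS sizing for groups of virtual machines.
# All per-VM figures below are illustrative placeholders, not measured values.

RAID_WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}  # back-end writes per logical write

def required_iops(vm_groups):
    """Sum peak IOPS across VM groups, weighting writes by the RAID penalty."""
    total = 0.0
    for group in vm_groups:
        peak = group["count"] * group["peak_iops_per_vm"]
        writes = peak * group["write_ratio"] * RAID_WRITE_PENALTY[group["raid"]]
        reads = peak * (1 - group["write_ratio"])
        total += reads + writes
    return total

def backend_iops(disk_count, iops_per_disk):
    """Raw IOPS the spindles or SSDs behind the array can deliver."""
    return disk_count * iops_per_disk

if __name__ == "__main__":
    groups = [  # hypothetical server groups being brought online
        {"count": 40, "peak_iops_per_vm": 60, "write_ratio": 0.3, "raid": "raid5"},
        {"count": 10, "peak_iops_per_vm": 400, "write_ratio": 0.6, "raid": "raid10"},
    ]
    need = required_iops(groups)
    have = backend_iops(disk_count=48, iops_per_disk=180)  # e.g., assumed 10K SAS drives
    print(f"Peak demand: {need:,.0f} IOPS; backend capability: {have:,.0f} IOPS")
    print("OK" if have >= need else "Undersized for peak -- expect contention")
```

In this hypothetical case the back end falls short of the combined peak, which is exactly the trouble Schaapman warns about when server groups are brought online without the IOPS to match.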

The problem for agencies that are virtualizing is that, as they consolidate data centers and provide shared services across jurisdictions, workloads that once had their own dedicated resources start competing for access to the same shared infrastructure, said Augie Gonzales, director of product marketing at DataCore Software.

That produces behaviors very different from those found in the physical environment: requests for resources collide with one another in ways that classical design criteria would never lead you to expect.

“With those, you would isolate the workloads and be able to dial in exactly what was needed for them and not have to worry about all the other noise,” Gonzales said. “In the virtual and consolidated world, the sandbox is exposed, and there are no walls to stop one thing from encroaching on something else in the sense of I/O requests and how quickly these get to shared disks.”

This is forcing a turnaround in thinking among IT professionals. Several years ago, they took what Gonzales called a naive approach, treating storage architecture as the last step in putting a solution together. They felt they could simply plug in whatever capacity was needed, at the lowest cost.

“But they’ve been surprised how much storage has to do with speed and how fast responses are in the virtualized environment and what it means to the reliability of results,” he said. “So they’ve learned that part of the transition from the physical to the virtual world is that success is driven by what kind of shared storage infrastructure you put together for both environments as you are cutting over.”

It comes back to avoiding contention, which hinges on a few key resources: network bandwidth, CPU power, cache size and storage I/O. Know which applications are being put into the infrastructure and how users access them, and then make sure enough resources are available to handle the peaks in demand.

That will become even more important as the number of virtual desktops increases. At certain times during the day — for example, in the early morning — a large number of people will “boot up” their virtual desktops all at once, causing a rush of requests to the servers that provide their desktop images. Get that wrong, and you’ll have a bunch of angry users flooding IT with complaints about not being able to get their work done.
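A rough sketch of that boot-storm arithmetic, using assumed per-desktop figures rather than vendor numbers, shows why the morning rush matters and why staggering logons (or adding IOPS headroom) is the usual answer:

```python
# Rough estimate of the morning "boot storm" for virtual desktops.
# The per-desktop and array figures are illustrative assumptions.

DESKTOPS = 2000             # virtual desktops booting in the same window
BOOT_IOPS_PER_DESKTOP = 50  # booting is far heavier than steady-state I/O
STEADY_IOPS_PER_DESKTOP = 8
ARRAY_PEAK_IOPS = 25000     # what the shared array can sustain (assumed)

def concurrent_desktops_supported(peak_iops, boot_iops):
    return peak_iops // boot_iops

def stagger_minutes(desktops, concurrent, boot_minutes=2):
    """Length of a staggered boot window that stays under the array's peak."""
    waves = -(-desktops // concurrent)  # ceiling division
    return waves * boot_minutes

if __name__ == "__main__":
    worst_case = DESKTOPS * BOOT_IOPS_PER_DESKTOP
    steady = DESKTOPS * STEADY_IOPS_PER_DESKTOP
    concurrent = concurrent_desktops_supported(ARRAY_PEAK_IOPS, BOOT_IOPS_PER_DESKTOP)
    print(f"Simultaneous boot demand: {worst_case:,} IOPS vs. steady state {steady:,}")
    print(f"Array sustains ~{concurrent} desktops booting at once;")
    print(f"staggering all {DESKTOPS} over ~{stagger_minutes(DESKTOPS, concurrent)} minutes avoids the storm")
```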

“If the storage layer and application layer are ignorant of the virtualization technology, the [storage-area network] or [network-attached storage] array or whatever storage you are riding on top of becomes the critical bottleneck,” said Peter Doolan, group vice president and chief technologist at Oracle Public Sector. “The virtualization layer has no clue why that I/O is coming in. All it knows is that some [application programming interface] has been called that mimics an I/O to a disk drive, which means that the customer has to compensate for that architectural weakness by having a highly efficient I/O subsystem that can take into account the [virtual machine] broadcast storms.”

The way you mitigate those I/O storms in virtual environments is to engineer around the problem, he said, “by having the biggest I/O pipe possible to your I/O subsystem.”
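As a rough illustration of that sizing exercise, the sketch below converts nominal link speeds into usable throughput and asks how many links a hypothetical I/O storm would need; the workload figures and the 80 percent efficiency factor are assumptions, not measurements.

```python
# Quick check of whether the "I/O pipe" to the storage subsystem is big enough
# for an I/O storm. Link speeds are standard; the workload figures are assumed.

LINK_GBPS = {"8Gb FC": 8, "10GbE iSCSI": 10, "16Gb FC": 16}

def usable_mb_per_s(gbps, protocol_efficiency=0.8):
    """Convert a nominal line rate to a rough usable throughput figure."""
    return gbps * 1000 / 8 * protocol_efficiency

def storm_mb_per_s(vm_count, iops_per_vm, io_size_kb=64):
    """Aggregate throughput if every VM hits peak I/O at the same moment."""
    return vm_count * iops_per_vm * io_size_kb / 1024

if __name__ == "__main__":
    demand = storm_mb_per_s(vm_count=300, iops_per_vm=80)  # hypothetical storm
    for name, gbps in LINK_GBPS.items():
        per_link = usable_mb_per_s(gbps)
        links_needed = int(-(-demand // per_link))          # ceiling division
        print(f"{name}: ~{per_link:.0f} MB/s usable -> {links_needed} link(s) for {demand:.0f} MB/s")
```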

Another form of virtualization also comes into play here: virtualization of the storage itself. Most organizations already have the physical storage needed to handle their virtualization workloads, but to make it fully capable of addressing those needs, the storage also has to be virtualized. That's done with a storage hypervisor, which, like its server and desktop equivalents, is software that pools an organization's storage assets under a central administration that then parcels out capacity as required.
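The toy model below illustrates the idea in Python: several physical arrays contribute capacity to a single pool, and virtual volumes are carved out of whichever array has room. The class and method names are invented for illustration and don't correspond to any vendor's storage hypervisor API.

```python
# A toy model of what a storage hypervisor does: pool capacity from several
# physical arrays and parcel it out as virtual volumes on demand.

from dataclasses import dataclass, field

@dataclass
class PhysicalArray:
    name: str
    capacity_gb: int
    used_gb: int = 0

    @property
    def free_gb(self):
        return self.capacity_gb - self.used_gb

@dataclass
class StoragePool:
    arrays: list = field(default_factory=list)
    volumes: dict = field(default_factory=dict)

    def add_array(self, array: PhysicalArray):
        self.arrays.append(array)

    def provision(self, volume_name: str, size_gb: int):
        """Carve a virtual volume out of whichever array has the most free space."""
        target = max(self.arrays, key=lambda a: a.free_gb, default=None)
        if target is None or target.free_gb < size_gb:
            raise RuntimeError(f"Pool cannot satisfy {size_gb} GB for {volume_name}")
        target.used_gb += size_gb
        self.volumes[volume_name] = (target.name, size_gb)
        return self.volumes[volume_name]

if __name__ == "__main__":
    pool = StoragePool()
    pool.add_array(PhysicalArray("san-01", capacity_gb=20_000))
    pool.add_array(PhysicalArray("nas-02", capacity_gb=8_000))
    print(pool.provision("vdi-boot-images", 2_000))  # placed on san-01
    print(pool.provision("db-archive", 6_000))       # still san-01 (most free space)
```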

All of this adds up to an increasingly complex environment that can be tough to manage in a way that keeps end-user performance from degrading. It's important for administrators of a virtualized environment not to get too attached to just one part of it, said Leena Joshi, senior director of solutions marketing at Splunk.

“If you only look at the virtualization layer metrics, then you are probably looking at some metrics that are incorrect,” she said.

If you want to know how much memory each virtual machine is using and you are only looking at what the hypervisor is telling you, she said, you will get an incomplete picture because the hypervisor won't count things such as cache memory. If you are running a resource-heavy application such as a database, the database's cache memory is not included in the active memory reported by the hypervisor, "and then the virtualization administrator and the owner of the application will be at loggerheads."

“The application owner thinks he doesn’t have enough memory, and the virtualization manager says, ‘You guys are conning me. I gave you so much memory, and you’re not using it,’” she said. “So you can’t just look at it as a silo problem or just one of the virtualization layer. You have to look at the operating system metrics, the application logs, the underlying storage, the underlying network metrics and so on, and then you need something that can pull all of that together.”
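A minimal sketch of that reconciliation, using made-up numbers and field names rather than real telemetry, shows how joining the hypervisor's view with the guest's view exposes the cache memory the hypervisor reports as idle:

```python
# Why single-layer metrics mislead: the hypervisor's "active memory" misses
# memory a guest database uses as cache, so the two views must be joined.
# All numbers and field names are hypothetical samples, not real telemetry.

hypervisor_metrics = {            # what the virtualization layer reports
    "db-vm-01": {"granted_gb": 64, "active_gb": 12},
}

guest_os_metrics = {              # what the OS / application inside the VM reports
    "db-vm-01": {"used_gb": 20, "cache_gb": 38, "free_gb": 6},
}

def reconcile(vm):
    hyp = hypervisor_metrics[vm]
    guest = guest_os_metrics[vm]
    really_in_use = guest["used_gb"] + guest["cache_gb"]
    print(f"{vm}: hypervisor sees {hyp['active_gb']} GB active of {hyp['granted_gb']} GB granted")
    print(f"{vm}: guest sees {really_in_use} GB in use once cache is counted")
    if really_in_use > hyp["active_gb"]:
        print("-> the 'idle' memory is doing real work; correlate layers before reclaiming it")

reconcile("db-vm-01")
```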

Viewed that way, storage and storage virtualization could resolve many of the performance problems in the virtualized infrastructure. Because applications and workloads run inside virtual machines that move from one physical server to another through the storage layer, the virtualized storage infrastructure becomes the traffic cop for those cutovers from one machine to another.

“Anything that has to do with dynamic resource scheduling, with failover, with live migrations between environments and so on, that’s all mediated by the storage virtualization layer,” Gonzales said. “That, again, points to storage being the central influence on the outcome of an organization’s virtualization efforts.”


About this report

This report was commissioned by the Content Solutions unit, an independent editorial arm of 1105 Government Information Group. Specific topics are chosen in response to interest from the vendor community; however, sponsors are not guaranteed content contribution or review of content before publication. For more information about 1105 Government Information Group Content Solutions, please e-mail us at GIGCustomMedia@1105govinfo.com.