Ditch the data center groupthink and consider a new efficiency option

Moore’s Law, the observation that the number of transistors on a chip doubles roughly every two years, might be the most spot-on prediction in the technology industry, but the so-called law of the instrument isn’t far behind.

Better known in its more colloquial form — if you have a hammer, everything looks like a nail — it aptly describes IT’s tendency to overuse familiar tools.

These days, server virtualization is the new IT hammer, and although it would be ridiculous to deny the technology’s positive impact, it might not be a catch-all tool for fixing every data center challenge. However, groupthink can make it hard to see when a potentially better solution for some problems comes along, especially if the new way is a conceptual U-turn.

That pretty much describes the situation with microservers, a new class of extremely energy-efficient computers targeted at server farms and big data centers.

Whereas virtualization has become synonymous with dialing back server proliferation at organizations by consolidating the collective data-processing workload onto fewer, more powerful boxes, microservers embody the opposite approach.

Definitions are still a bit squishy at this early stage, but a microserver typically puts a single, low-power processor, similar to the kind used in smart phones and tablet PCs, on a circuit card, with each one functioning as a stand-alone server. The technology densely packs hundreds of those cards into a single rackmount box that, depending on the design, provides communal power, cooling and network connectivity.

The design allows a microserver system to match a traditional server’s processing muscle while using only one-fourth the power and space, according to microserver pioneer SeaMicro, citing its flagship product.

Power bills are one of the biggest expenses for data centers, so that math has some people positively bullish about microservers’ prospects.
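To see why that math gets attention, consider a minimal back-of-the-envelope sketch in Python. Every number in it (server count, watts per node, electricity price) is a hypothetical placeholder rather than a figure from SeaMicro or the article; the point is only that cutting power draw to one-fourth compounds quickly at data-center scale.

    # Hypothetical back-of-the-envelope comparison of annual power cost.
    # None of these figures come from the article or any vendor datasheet.
    HOURS_PER_YEAR = 24 * 365
    PRICE_PER_KWH = 0.10  # assumed electricity price in dollars

    def annual_power_cost(node_count, watts_per_node):
        """Annual electricity cost for a server pool, ignoring cooling overhead."""
        kwh = node_count * watts_per_node * HOURS_PER_YEAR / 1000
        return kwh * PRICE_PER_KWH

    # Assumption: 1,000 traditional servers at roughly 400 watts each...
    traditional = annual_power_cost(1000, 400)
    # ...versus the same workload on microservers drawing one-fourth the power.
    micro = annual_power_cost(1000, 100)

    print(f"traditional: ${traditional:,.0f} per year")
    print(f"microserver: ${micro:,.0f} per year")
    print(f"savings:     ${traditional - micro:,.0f} per year")

Under those invented numbers, the savings run into the hundreds of thousands of dollars a year, which is the kind of arithmetic driving the enthusiasm.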

“I seriously believe that the small, low-power server model will eliminate the use of virtualization in a majority of public cloud capacity by 2018,” writes John Treadway, global director of cloud computing solutions at Unisys, on his personal blog “CloudBzz.” “The impact in the enterprise will be initially less significant and will take longer to play out, but in the end it will be the same result.”

Observers agree that microservers won’t replace beefy, virtualized servers for single-threaded applications that require lots of number-crunching horsepower, roomy system memory and expandability. But they can be a good fit at data centers that provide server hosting or co-location services for which a dedicated physical machine needs to be allocated to a specific customer or application in a cost-effective manner, said Greg Schulz, founder and senior adviser at consulting firm Server and StorageIO Group.

Microservers are also well-suited to Web server applications that process lots of little independent transactions, such as system log-ins, searches and Web page views. System designers call that approach scaling out, as opposed to scaling up with a bigger, more powerful machine.
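The scale-out idea is simple enough to sketch in a few lines of Python. The snippet below is purely illustrative, with the node names and the toy workload invented for the example: because log-ins, searches and page views are independent of one another, a front end can spread them round-robin across a large pool of small servers instead of funneling everything through one big machine.

    from itertools import cycle

    # Illustrative only: a pool of many small, identical microserver nodes.
    NODES = [f"micro-{i:03d}" for i in range(400)]

    def dispatch(requests, nodes=NODES):
        """Send each independent transaction to the next small node in turn.

        This works because the transactions (log-ins, searches, page views)
        don't need to share state with one another.
        """
        ring = cycle(nodes)
        return [(request, next(ring)) for request in requests]

    # A toy workload of small, independent front-end transactions.
    workload = ["login:alice", "search:microservers", "pageview:/home", "login:bob"]
    for request, node in dispatch(workload):
        print(f"{request} -> {node}")

Adding capacity in this model means adding more cheap nodes to the pool (scaling out) rather than buying a bigger box (scaling up).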

Facebook plans to start deploying microservers to handle those kinds of front-end Web transactions, said Gio Coglitore, director of Facebook labs. He spoke at an Intel industry briefing in March at which Intel officials said they are working on several power-efficient server chips designed specifically for microservers.

Coglitore said microservers can be more cost-effective for the giant social networking site than virtualization, which also has the unfortunate side effect of vendor lock-in because it adds a new software layer to the mix.

Some people say a diffusion of microservers might provide a better safety net for IT operations because it avoids putting all the eggs in one basket, as virtualizing onto fewer, larger servers does. But others say that’s a false dichotomy, one that responsible engineering renders moot.
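Both camps are really arguing about failure domains. As an illustration only, with every assumption invented for the example (identical nodes, independent failures, a 1 percent per-node failure rate), the Python sketch below shows the arithmetic the more-baskets camp has in mind: losing one of four big hosts takes out a quarter of your capacity, while losing one of 400 small nodes barely registers.

    from math import comb

    def prob_capacity_loss_exceeds(n_nodes, p_fail, loss_fraction):
        """P(more than loss_fraction of identical nodes are down at once),
        assuming independent node failures (a simplifying assumption)."""
        tolerable = int(n_nodes * loss_fraction)  # failures we can absorb
        return sum(comb(n_nodes, k) * p_fail**k * (1 - p_fail)**(n_nodes - k)
                   for k in range(tolerable + 1, n_nodes + 1))

    P_FAIL = 0.01  # invented per-node failure probability over some window

    # Few large virtualized hosts vs. many small microserver nodes, each
    # design asked to keep at least 90 percent of its capacity online.
    print(prob_capacity_loss_exceeds(4, P_FAIL, 0.10))    # roughly 0.04
    print(prob_capacity_loss_exceeds(400, P_FAIL, 0.10))  # vanishingly small

The rebuttal, of course, is that a well-engineered virtualized environment doesn’t leave any single host as a 25 percent failure domain in the first place, which is the point Gartner’s David Cappuccio makes next.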

“Do you want two bulls pulling the wagon or a thousand chickens?” asked David Cappuccio, a managing vice president and chief of research for the infrastructure teams at Gartner. “This is the old mainframe vs. distributed argument, and it’s still invalid — as long as your infrastructure is designed and managed the right way.”

In other words, there is no single right answer, only right solutions.

About the Author

John Zyskowski is a senior editor of Federal Computer Week. Follow him on Twitter: @ZyskowskiWriter.



Reader comments

Mon, Jul 25, 2011 Jeff K Smith Rockville, MD

Server virtualization has largely been just another approach to moving data from one box to another. It misses the heart of the matter: the data, what resides in the box. That work isn't even done yet, and we are embarking on another trend, the cloud, which is yet another approach to moving our data from one box to another to avoid dealing with the heart of the matter, the data. The only difference is we'll no longer even know where our data physically is or who has control of it. The issue has always been understanding, harmonizing and consolidating data, and these moves have cost us a lot but have gotten us no closer. But it's easier to take something you don't understand and move it onto another box than it is to learn to understand it. At least federal government CIOs have made lots of hardware vendors happy playing musical chairs with their chaotic data, moving it from box to box. But we're still no closer to understanding and harmonizing our data than we were a decade ago. We need CIOs who think about more than hardware, perhaps who think about the "information" part of their title instead.

