Labs explore Linux super clusters

Clustering's cost advantage has compelled some organizations to migrate from Unix-based systems and symmetric multiprocessing machines. Hewlett-Packard Co.'s Alpha machines (originally from Digital Equipment Corp.) and Silicon Graphics Inc.'s (SGI) boxes are among those being exchanged for clusters.

"They're taking applications that in the past needed big iron to run — SGI, Alpha or whoever — and moving those down," said Paul Barker, vice president of marketing at RLX Technologies Inc., a Linux cluster software vendor.

Linux vendors are hoping to tap into this movement. Red Hat Inc., for example, aims to build a "targeted" Linux distribution adaptable to high-performance computing, according to Brian Stevens, Red Hat's vice president of operating system development.

The national labs are among the key stomping grounds for Linux clustering specialists. "The labs are a good place to be," said Tom Leinberger, vice president of sales at Aspen Systems Inc., which manufactures Linux-based clusters. But Aspen's strategy is to target smaller facilities within such organizations as the National Institutes of Health and the National Institute of Standards and Technology.

Leinberger said the company can assist smaller groups that don't have large numbers of in-house technicians to work on a clustering project.

Nevertheless, the labs are showplaces for high-performance computing models. Linux machines and traditional supercomputers are both put to the test at the Center for Computational Sciences at the Energy Department's Oak Ridge National Laboratory. The center provides a computing test bed for DOE and university scientists.

Oak Ridge recently contracted with SGI for a 256-processor system based on Intel Corp.'s 64-bit Itanium 2 chip. SGI's Altix 3000 runs Linux.

Initially, SGI will deploy the system as a cluster of four Altix machines with 64 processors apiece, because 64 processors is the most the company currently delivers in a single system. SGI expects to eventually scale the Altix machine to 256 processors, noted Buddy Bland, director of operations at the center.

Bland said both the traditional supercomputing model and clustering have pros and cons. In his experience, clustering offers a large degree of fault tolerance. On the downside, the "burden of managing all of the nodes individually is a bigger cost," he said.

Bland said DOE and Oak Ridge officials have been working on a number of projects to lower the cost of cluster system management.
