Linux clusters exploring universe
- By Dan Caterinicchia
- Nov 06, 2001
Scientists at the Energy Department's Fermi National Accelerator Laboratory announced Nov. 6 that they are using clustered Linux supercomputers to help identify new particles that may reveal the building blocks of the universe.
Cluster supercomputing, a form of parallel or distributed computing, links multiple computers together into a single, unified system.
Fermi scientists are studying collisions of protons and antiprotons to identify new particles produced in those collisions.
Only a fraction of the millions of particle collisions per second are selected for further study, and the clustered Linux system helps quickly identify the most interesting ones, said Gustaaf Brooijmans, a Wilson Fellow at Fermi and project leader for the experiment's computing cluster.
Brooijmans said that the clusters, provided by Linux NetworX Inc., are "more cost-effective than larger systems" for CPU-intensive tasks, and that "to select the most interesting particle collisions, it is essential that we have a high-performance computing solution that is reliable and powerful."
Fermi's 48-node cluster includes 96 Intel Corp. Pentium III 1.0 GHz processors, 48 gigabytes of random-access memory and a Fast Ethernet interconnect. Fermi also signed an ongoing service and support agreement with Linux NetworX, said Clark Roundy, a vice president at the Sandy, Utah-based company.
Linux NetworX's government customers include other national laboratories, such as Lawrence Livermore and Lawrence Berkeley, as well as the National Security Agency.