Plug-and-play supercomputers

When you plug an appliance into an outlet, you don't need to know that it draws power from a large electrical grid shared by many other appliances. If such an inconspicuous system can be a reliable source of electrical power for toasters, radios and hair dryers, why not apply the same concept to computing power?

That's what researchers at a consortium of institutions plan to do as they team with several private-sector technology heavyweights on a $53 million project funded by the National Science Foundation.

Four research centers will work primarily with IBM Corp., Qwest Communications International Inc. and Intel Corp. to build the Distributed Terascale Facility (DTF). A high-speed TeraGrid network will connect the centers and enable scientists and researchers across the country—and eventually around the world—to share resources, scan remote databases, run applications on geographically dispersed computers and view complex computer simulations in real time from multiple locations.

The network will transform the research world, said Dan Reed, director of the National Center for Supercomputing Applications (NCSA) and the National Computational Science Alliance.

"What we're really building is a high-speed backbone to connect the four centers," Reed said. "The TeraGrid provides seamless access without people knowing where the infrastructure is and how it works. The DTF is the beginning of 21st century infrastructure for scientific computing."

The four research centers that will make up the DTF are the NCSA at the University of Illinois at Urbana-Champaign; the San Diego Supercomputer Center (SDSC) at that city's University of California campus; the Argonne National Laboratory in Argonne, Ill.; and the California Institute of Technology in Pasadena (see box).

The TeraGrid could have a dramatic impact on scientific research. SDSC director Fran Berman foresees scientists investigating the fundamental processes of the human brain, sharing simulations of new cancer drugs and ushering in an era of human genome research.

The grid will create vast pools of computing resources by connecting widely distributed supercomputers using the Internet or high-speed research networks, as well as open-source protocols from Globus, a consortium of government and educational institutions working on grid-computing research.

"Globus is kind of the glue that ties together all the computers," said Michael Nelson, director of Internet technology and strategy at IBM.

IBM Global Services will deploy clusters of IBM eServer Linux systems at the four DTF sites beginning in the third quarter of 2002. The servers will contain Intel's Itanium next-generation microprocessor, said Robert Fogel, strategic marketing manager for Intel's high-performance computing division.

The system will have a storage capacity of more than 600 terabytes. The Linux clusters will be connected via a 40 gigabits/sec Qwest network, creating a single computing system able to process 13.6 trillion calculations/sec (13.6 teraflops).

The initial TeraGrid is scheduled for completion in three years, but pieces of the network should be operational by the end of this year, Reed said.

As the grid grows to include regional and then international research facilities, it will eventually be used for commercial applications, in much the same way the Internet is, Reed said.


Supercomputing centers

Each of the Distributed Terascale Facility sites will play a unique role in the project:

* The National Center for Supercomputing Applications will lead the project's computational aspects with an IBM Corp. Linux cluster powered by Intel Corp. Itanium processors. The cluster's peak performance will be 8 teraflops, with 240 terabytes of secondary storage.

* The San Diego Supercomputer Center will lead the data and knowledge management effort. An IBM Linux cluster with Itanium processors will have a peak performance of more than 4 teraflops and provide 225 terabytes of network disk storage.

A Sun Microsystems Inc. server will provide a gateway to grid-distributed data.


* The Argonne National Laboratory will deploy advanced distributed computing software, high-resolution rendering and remote visualization capabilities and networks, using a 1 teraflop IBM Linux cluster.

* The California Institute of Technology will provide access to large data collections and will connect data-intensive applications to components of the TeraGrid. Caltech will deploy a 0.4 teraflop IBM/Itanium cluster and will manage 86 terabytes of online storage.

