Energy lab readies for Solaris 10
A global collaboration grid at an Energy Department laboratory will lean on hardware from Sun Microsystems Inc. and the company’s new Solaris 10 operating system to allow data sharing across multiple levels of security and confidentiality.
“We’re just getting under way,” said Eric Greenwade, chief IT architect at the Idaho National Engineering and Environmental Laboratory in Idaho Falls, which recently purchased a massive Sun cluster to tackle large-scale design projects that will pull in experts from around the world. By 2030, Energy wants safer, lower-waste Generation IV reactors that can compete economically with traditional electricity sources. The architecture will help collaborators address the department’s challenges while protecting the intellectual property of those involved.
As you might expect, INEEL already houses the computing resources to support its varied research projects. High-end compute power is available through a PowerMac G4 cluster, a half-dozen Linux clusters and three Cray SV1 supercomputers.
But to host the Generation IV collaboration, the 5,000-employee lab invested $1.97 million in a grid-computing cluster of 230 Sun Fire V20z servers with Advanced Micro Devices’ Opteron processors and 12TB of Sun StorEdge 6320 storage.
The Opteron servers drew Greenwade’s interest “for a large part of our problem space,” he said. “The Opteron is a very good processor for our problems.”
But one of the most significant attractions of the Sun solution was the logical partitioning possible in the forthcoming Sun Solaris 10 operating system. A server running a single instance of Solaris 10 could host hundreds of independent partitions, called containers, with varied processor, memory and bandwidth requirements.
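On Solaris 10, such containers (zones) are defined with the zonecfg utility and installed and booted with zoneadm. As a rough illustration only, not the lab's actual configuration, carving out one partition with its own CPU allocation might look like this (the zone name, path, and share values are hypothetical):

```shell
# Define a new container (zone). The name "geniv-sim" and the
# settings below are illustrative, not INEEL's actual setup.
zonecfg -z geniv-sim <<'EOF'
create
set zonepath=/zones/geniv-sim
set autoboot=true
add rctl
set name=zone.cpu-shares
add value (priv=privileged,limit=20,action=none)
end
EOF

# Install and boot the container under the single Solaris 10 instance.
zoneadm -z geniv-sim install
zoneadm -z geniv-sim boot
```

Each zone runs its own isolated process space and network identity while sharing the one underlying operating system instance, which is what lets a single server host many independent partitions with different resource allotments.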
Solaris 10, due out this month, has long been associated with proprietary, RISC-based computing platforms, although Sun has made an x86 version for a while. The new version, which the company says will include better support for 32-bit Intel Xeon processors and 64-bit AMD Opteron processors, was largely rewritten to incorporate high-availability features such as predictive self-healing.
Combined with Opteron’s fast performance and relatively low cost, the Solaris 10 partitioning “led us to choose the cluster we got,” Greenwade said. “The hardware-software infrastructure has to be cross-disciplinary and support multiple dynamic partners with very easy data access but also with very strong data protection,” Greenwade said.
Agreements on sharing
So far there are about a dozen collaborators working on each of the six current Generation IV designs. “Some of the work overlaps,” Greenwade said. “For each design type, they have structured agreements about who will share what. We don’t want them all doing similar work.”
To that end, the researchers must all work with the same software versions and the same numeric model of nuclear fuel as their input. “There’s a single input file, without any copies,” Greenwade said, “whether the researchers are in Idaho or Japan or South Africa.”
Most of the researchers use the high-speed connectivity of the Energy Sciences Network, operated by Lawrence Berkeley National Laboratory in California. ESnet is a national research network with a current maximum backbone speed of 10 Gbps.
Although the collaborative data appears on the researchers’ local computers, they cannot store any of it locally. They must use the containers and security features on INEEL’s grid.
Even though all the researchers can use the lab’s new high-performance grid, they don’t have to, said lab CIO Dan Wickard. “We’re not promising exclusivity of the lab computers; we’re promising rigor and standardization.”