Colonial One High Performance Computing Initiative

Colonial One

For research that requires high performance computing for data analysis, GW recently acquired and is in the process of implementing a new shared high performance computing cluster named Colonial One.  Colonial One will be implemented and managed by professional IT staff in the Technology, Architecture, and Research Services (TARS) group within the University’s Division of IT, with University-sponsored computational staff housed in the Computational Biology Institute and the Columbian College of Arts and Sciences.  Access to Colonial One will be open to the University community, with priority access for schools and faculty members that contribute to the cluster’s core infrastructure and additional compute nodes.  The initial implementation of Colonial One represents a partnership between DIT’s TARS group and OTS, responding to current and developing faculty research needs across the College’s academic disciplines.

The Colonial One Wiki is available to those who want access.

The following sections highlight Colonial One’s facility, compute and interconnect capacity, and storage systems:


Facility

Located on the Virginia Science and Technology Campus in one of GW’s two enterprise-class data centers, Colonial One will be housed in a purpose-built facility featuring:

  • Professional IT management by the University’s central Division of IT, including 24-hour on-premises and remote environmental monitoring with hourly staff walkthroughs.
  • Redundant power distribution, including UPS (battery) and generator backup.
  • Redundant cooling systems utilizing a dedicated chilled water plant and a glycol refrigeration system.
  • Direct network connectivity to the University’s robust 22 Gbps inter-campus fiber optic network.  A major infrastructure upgrade later this year will increase this inter-campus capacity to 100 Gbps.

Compute and Interconnect Capacity

Colonial One’s initial compute capacity features a total of 1,408 CPU cores and 159,744 CUDA cores in the following compute node configurations (the arithmetic behind these totals is sketched after the list):

  • 64 standard CPU nodes featuring dual Intel Xeon E5-2670 2.6GHz 8-core processors, RAM capacities of 64GB, 128GB, or 256GB depending on the node, and dual on-board solid-state drives.
  • 32 GPU nodes featuring dual Intel Xeon E5-2620 2.0GHz 6-core processors with dual NVIDIA Kepler K20 GPUs, 128GB of RAM, and dual on-board solid-state drives.
  • FDR InfiniBand network interconnect featuring 54.5 Gbps total throughput, with 2:1 oversubscription per compute node.
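
As a quick check on the headline figures, here is a minimal Python sketch that reproduces the aggregate core counts from the node configurations above; the one input not stated above is NVIDIA’s published figure of 2,496 CUDA cores per Kepler K20 GPU.

    # Reproduce Colonial One's aggregate core counts from the node specs above.
    cpu_nodes = 64
    cores_per_cpu_node = 2 * 8    # dual 8-core Xeon E5-2670
    gpu_nodes = 32
    cores_per_gpu_node = 2 * 6    # dual 6-core Xeon E5-2620
    gpus_per_node = 2             # dual NVIDIA Kepler K20
    cuda_cores_per_k20 = 2496     # NVIDIA's K20 spec (not stated in the list above)

    total_cpu_cores = cpu_nodes * cores_per_cpu_node + gpu_nodes * cores_per_gpu_node
    total_cuda_cores = gpu_nodes * gpus_per_node * cuda_cores_per_k20

    print(total_cpu_cores)   # 1408
    print(total_cuda_cores)  # 159744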

Storage Systems

The Colonial One cluster will utilize a primary storage system and a high-speed scratch storage system, both connected to the InfiniBand interconnect network, with the following specifications (a typical staging workflow is sketched after the list):

  • Dell NSS primary storage with 144TB of usable capacity.
  • Dell HSS high-speed scratch storage with 300TB of usable capacity.
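
The split between the two systems implies the usual HPC storage pattern: persistent data lives on the NSS primary volume, while jobs stage working files onto the faster HSS scratch system and copy results back when finished. Below is a minimal Python sketch of that pattern; the mount points /groups and /scratch are hypothetical placeholders, since Colonial One’s actual paths are not specified above.

    import os
    import shutil

    # Hypothetical mount points -- placeholders, not Colonial One's actual paths.
    primary = "/groups/mylab"                                        # NSS: persistent primary storage
    scratch = os.path.join("/scratch", os.environ["USER"], "run42")  # HSS: fast scratch space

    os.makedirs(scratch, exist_ok=True)

    # Stage input onto high-speed scratch, compute there, then copy results back.
    shutil.copy(os.path.join(primary, "input.dat"), scratch)
    # ... run the computation against the scratch copy here ...
    shutil.copy(os.path.join(scratch, "output.dat"), primary)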

George Washington University Powers Research with Dell HPC Solution

A Dell case study on its partnership with the George Washington University.
