The HPC² facilities comprise three buildings: the Portera HPC Building and the CAVS Building, both located in the Thad Cochran Research, Technology, & Economic Development Park adjacent to the Mississippi State University campus in Starkville, MS, and the STC Building at the NASA John C. Stennis Space Center (SSC) near Bay St. Louis, MS. The Portera HPC Building is a 71,000 square foot facility designed in an open manner to facilitate multi-disciplinary interactions; it houses the organization's primary data center. The CAVS Building is a 57,000 square foot facility consisting of numerous office suites and experimental laboratories housing an extensive array of equipment in support of materials, advanced power systems, and human factors research activities, as well as a small data center. The STC Building at the NASA SSC is a 38,000 square foot facility consisting of office space, classroom space, and a data center. These buildings house state-of-the-art high performance computing clusters (i.e., supercomputers), associated support instrumentation, a dedicated computing staff led by Mr. Trey Breckenridge, and a dedicated business staff led by Ms. Brandy Akers. The HPC² equipment and operations team serves a coalition of select institutes and centers (traditionally called "member centers"). The HPC² member centers vary greatly in their research, educational, and service goals, but all are united by research excellence, a need for state-of-the-art high performance computing technologies and infrastructure, and histories of research and fiscal success, both independently and as part of large multi-disciplinary teams. The HPC² has been in operation for more than 20 years and has consistently been among the best managed and most powerful supercomputing sites in academia (and arguably in any sector).
The member centers help support and grow the HPC²'s infrastructure, while the HPC²'s human and computational resources are in turn leveraged by the member centers to increase Mississippi State University's scientific, educational, and economic footprint.
The HPC² provides an advanced computing infrastructure in support of the research and education activities of the collaboratory's member centers and institutes. This infrastructure includes high performance computing (HPC) systems, a fully immersive 3-D scientific visualization system, high performance storage systems, a large-capacity archival system, high-bandwidth networking systems, and an extensive number of desktop workstations. The primary computational systems consist of a 593 TeraFLOPS cluster with 4,800 Intel Ivy Bridge processor cores and 28,800 Intel Xeon Phi cores, 72 terabytes of main memory, 4 terabytes of Xeon Phi memory, and an FDR InfiniBand interconnect; a 34 TeraFLOPS, 3,072-core Intel Westmere cluster with 6 terabytes of RAM and a quad data rate (QDR) InfiniBand interconnect; a 10 TeraFLOPS, 2,048-core AMD Opteron cluster with 4 terabytes of RAM; and a small Cray XT5 for applications development. Data storage capabilities include 8 petabytes of high performance RAID-enabled disk systems, including a large parallel file system, and a 9 petabyte near-line storage/archival system. The HPC²'s advanced scientific visualization needs are met by an immersive CAVE-like virtual reality environment, dubbed the Virtual Environment for Real Time EXploration (VERTEX). The networking infrastructure backbone consists primarily of a 10-Gigabit Ethernet network interconnecting the organization's primary computing and storage systems, as well as an extensive number of high performance edge switches providing connectivity to the organization's more than 500 high-end desktops and laptops. This network infrastructure supports full redundancy at the core and allows for aggregated connections to support high-bandwidth activities.
Each of the three facilities obtains wide area (external) network connectivity to the commodity Internet and Internet2 through dual 10 Gigabit/sec connections into the Mississippi Optical Network (MISSION), a regional optical network supporting research activities within the state. The two MISSION connections follow geographically diverse paths across the state, providing high-availability, fault-tolerant communication channels and access to the Internet2 connector site in Jackson, Mississippi, which supports a potential capacity of more than 8 terabits per second. These robust wide area network connections give HPC² researchers the ability to share large data sets with collaborators across the country and around the globe.
Membership requires more than a demonstrated need for considerable computational power: member centers must also have a history of sustained research success, as demonstrated by scholarly works and funded grants/contracts, and a dedication to contributing to the success of the HPC². HPC² member centers have the following responsibilities and privileges:
The IGBB became an HPC² member center in 2011. Since that time, several institutes have joined the HPC² team. Current HPC² members are as follows (in alphabetical order):
When the IGBB joined the HPC² in 2011, ICRES (then known as the Center for Advanced Vehicular Systems, or CAVS), GRI, NGI, and CCS were already member centers. DASI, CCI, and ASSURE have joined the HPC² team in the years since. The IGBB currently ranks fifth of the eight member centers in total expenditures; ICRES accounts for approximately half of all member center expenditures (Figure 1).
Table 1. Each HPC² member center's contribution to total expenditures [average of fiscal year (FY) 2015 and FY2016].
Although the IGBB is by no means the largest HPC² member center, it is the only center whose primary focus is biomolecular research. Moreover, its funding sources are more diversified than those of some other member centers, providing it with relative financial stability.