Purdue Clusters

The Rosen Center for Advanced Computing (RCAC, Purdue's supercomputing center) maintains several clusters that serve different purposes. The first four listed below are "Purdue Community Clusters," whose capacity can be purchased by Purdue faculty and staff. The remaining clusters are specialty clusters with atypical paths to access.
Bell

Bell is a Community Cluster optimized for communities running traditional, tightly-coupled science and engineering applications. Bell was built through a partnership with Dell and AMD over the summer of 2020. It consists of Dell compute nodes with two 64-core AMD Epyc 7662 "Rome" processors (128 cores per node) and 256 GB of memory. All nodes have a 100 Gbps HDR InfiniBand interconnect and a 6-year warranty; Bell will be retired when the warranty ends in the fall of 2026.
Negishi

Negishi, the successor to Bell, is a Community Cluster optimized for communities running traditional, tightly-coupled science and engineering applications. Negishi was built through a partnership with Dell and AMD over the summer of 2022. It consists of Dell compute nodes with two 64-core AMD Epyc "Milan" processors (128 cores per node) and 256 GB of memory. All nodes have a 100 Gbps HDR InfiniBand interconnect and a 6-year warranty; Negishi will be retired when the warranty ends in the fall of 2028. Negishi is well suited to typical multi-process workloads, as illustrated below.
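Both Bell and Negishi target tightly coupled workloads that spread one computation across many cores, typically with MPI. As a rough illustration only, here is a minimal MPI "hello world" in Python, assuming the mpi4py package is available in your environment (module names and launch commands vary by cluster):

```python
# hello_mpi.py -- each MPI rank reports its identity.
from mpi4py import MPI

comm = MPI.COMM_WORLD    # communicator spanning all launched ranks
rank = comm.Get_rank()   # this process's ID within the communicator
size = comm.Get_size()   # total number of ranks

print(f"Hello from rank {rank} of {size}")
```

Launched with, for example, `mpirun -n 128 python hello_mpi.py`, this would place one rank on each of a Bell or Negishi node's 128 cores.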
Gautschi

Gautschi is a Community Cluster optimized for communities running traditional, tightly-coupled science and engineering applications. It is equipped with both CPU and GPU compute nodes, each designed for specific computational tasks. The Dell CPU compute nodes feature dual 96-core AMD Epyc "Genoa" processors, providing 192 cores per node and 384 GB of memory. The GPU compute nodes come with two 56-core Intel Xeon Platinum 8480+ processors, a total of 1,031 GB of CPU memory, and eight NVIDIA H100 GPUs with 80 GB of memory each. All compute nodes have a 400 Gbps NDR InfiniBand interconnect and service through 2030. Because it offers both CPU and GPU capacity, Gautschi is well suited to AI workloads.
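A quick way to compare these CPU nodes is memory per core, which bounds how much data each process can hold. A back-of-the-envelope check using only the node shapes quoted above:

```python
# Memory per core on the community clusters' CPU nodes,
# using the node specs quoted in this section.
nodes = {
    "Bell":     {"cores": 128, "mem_gb": 256},
    "Negishi":  {"cores": 128, "mem_gb": 256},
    "Gautschi": {"cores": 192, "mem_gb": 384},
}

for name, spec in nodes.items():
    print(f"{name}: {spec['mem_gb'] / spec['cores']:.1f} GB per core")
```

All three work out to 2.0 GB per core, a handy rule of thumb when deciding how many cores a memory-hungry job should request.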
Gilbreth

Gilbreth is a Community Cluster optimized for communities running GPU-intensive applications such as machine learning. Gilbreth consists of Dell compute nodes with Intel Xeon processors and NVIDIA Tesla GPUs. It comprises several node types, each with a different GPU model, ranging from NVIDIA V100s to A100s.
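Because Gilbreth mixes node types with different GPU models, a job may want to verify what hardware it actually landed on. A minimal sketch, assuming PyTorch is available in your environment (for example, via the cluster's module system):

```python
# Report which GPUs this job can see; useful on Gilbreth,
# where node types carry different GPU models (V100s to A100s).
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 2**30:.0f} GiB")
else:
    print("No CUDA devices visible; are you on a GPU node?")
```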
Scholar

Scholar is a small cluster suitable for classroom learning about high-performance computing (HPC). It consists of 6 interactive login servers and 16 batch worker nodes. Scholar is not designed to handle heavy research workloads; it is purely a teaching resource.
Anvil

Anvil, funded by a $10 million award from the National Science Foundation (NSF), significantly increases the capacity available to the NSF's Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which serves tens of thousands of researchers across the U.S. Anvil entered production in 2021 and will serve researchers for five years; additional NSF funding supports its operations and user support. Built in partnership with Dell and AMD, Anvil consists of 1,000 nodes, each with two 64-core AMD Epyc "Milan" processors, and will deliver over 1 billion CPU core hours to ACCESS each year, with a peak performance of 5.3 petaflops. Anvil's nodes are interconnected with 100 Gbps Mellanox HDR InfiniBand. The supercomputer ecosystem also includes 32 large-memory nodes with 1 TB of RAM each, and 16 nodes with four NVIDIA A100 Tensor Core GPUs each, providing 1.5 PF of single-precision performance to support machine learning and artificial intelligence applications. Access to Anvil is granted through the NSF's ACCESS program, not by RCAC.
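The core-hour figure follows directly from the node count. A quick sanity check of the "over 1 billion CPU core hours" claim, using only the specs above:

```python
# Sanity-check Anvil's annual core-hour capacity from its node specs.
nodes = 1_000
cores_per_node = 2 * 64        # two 64-core AMD Epyc "Milan" CPUs
hours_per_year = 24 * 365

core_hours = nodes * cores_per_node * hours_per_year
print(f"{core_hours:,} core-hours per year")  # 1,121,280,000
```

At 128,000 cores running around the clock, Anvil supplies roughly 1.12 billion core hours per year, consistent with the stated figure.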
Clusters for Controlled Research

Purdue also operates two clusters for controlled research: Rossmann and Weber. Rossmann is built to handle health data with higher levels of access restriction, such as patient data. Weber is designed for export-controlled data. Access to these two clusters is granted on a case-by-case basis and usually requires a review of what data is involved and why access is needed.