Welcome to RCAC Documentation¶
Announcement
This is a demo site designed for testing purposes only. Contents on this website may not reflect production RCAC resources. Check rcac.purdue.edu for official information.
New to RCAC?¶
Follow these steps to get up and running on RCAC clusters.
- Get an Account: Request access to RCAC computing resources through your Purdue career account or an ACCESS account.
- Connect to an RCAC Cluster: Learn how to log in via SSH, set up your environment, and access the cluster for the first time.
- Transfer Your Data: Move files to and from the cluster using SCP, SFTP, Globus, or the Data Depot research storage service.
- Submit Your First Job: Write a Slurm batch script, submit it to the scheduler, and monitor your job's progress.
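The steps above can be sketched end to end. This is a minimal illustration, not a definitive recipe: `cluster`, `myusername`, and the account name are placeholders, and actual hostnames and allocation names come from your account setup.

```shell
# Sketch of the getting-started workflow. "cluster", "myusername", and
# "myaccount" are placeholders -- substitute the values for your allocation.

# 1. Connect to a login node over SSH
ssh myusername@cluster.rcac.purdue.edu

# 2. Copy input data from your workstation to the cluster
scp input.dat myusername@cluster.rcac.purdue.edu:~/project/

# 3. Write a minimal Slurm batch script
cat > first_job.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=first-job
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=00:10:00
#SBATCH --account=myaccount   # allocation name is site- and user-specific

hostname
EOF

# 4. Submit the job and monitor its progress
sbatch first_job.sh
squeue -u myusername          # check queue status for your jobs
```

These commands require a cluster account and login access, so run them from your own workstation once your account is active.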
HPC User Guides¶
- Anvil: NSF-funded capacity cluster for the national research community. Features AMD EPYC Milan CPUs, NVIDIA A100 GPUs, and large-memory nodes. Available through ACCESS allocations. (128 cores/node | 256 GB RAM | A100 40GB GPUs)
- Gautschi: Purdue's community cluster for faculty and research groups. Powered by AMD EPYC Genoa CPUs and NVIDIA H100 GPUs. Access is through the community cluster purchase program. (192 cores/node | 384 GB RAM | H100 80GB GPUs)
- Bell: Community cluster optimized for communities running traditional, tightly coupled science and engineering applications. Built through a partnership with Dell and AMD, Bell consists of compute nodes with two 64-core AMD EPYC "Rome" processors and 256 GB of memory. (128 cores/node | 256 GB RAM | 100 Gbps HDR InfiniBand)
- Negishi: Community cluster optimized for communities running traditional, tightly coupled science and engineering applications. Built through a partnership with Dell and AMD, Negishi consists of compute nodes with two 64-core AMD EPYC "Milan" processors and 256 GB of memory. (128 cores/node | 256 GB RAM | 100 Gbps HDR InfiniBand)
- Gilbreth: Community cluster optimized for communities running GPU-intensive applications such as machine learning. Consists of Dell compute nodes with Intel Xeon processors and NVIDIA Tesla GPUs.
- Scholar: A small cluster suitable for classroom learning about high-performance computing. Consists of 6 interactive login servers and 16 batch worker nodes, accessible as a typical cluster with a job scheduler or as an interactive resource with a desktop-like environment.
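On GPU clusters such as Gilbreth, jobs must request GPUs explicitly in the batch script. The sketch below shows the general shape of such a request; the account name and module name are assumptions, and the exact partition, GPU type strings, and module versions vary by cluster and allocation.

```shell
#!/bin/bash
# Hypothetical GPU job script for a cluster like Gilbreth.
# "myaccount" is a placeholder; check your allocation for the real name.
#SBATCH --job-name=gpu-test
#SBATCH --nodes=1
#SBATCH --gpus-per-node=1     # request one GPU (--gres=gpu:1 is the older form)
#SBATCH --time=00:30:00
#SBATCH --account=myaccount

module load cuda              # module name assumed; see the software catalog
nvidia-smi                    # confirm the GPU is visible inside the job
```

Submitting with `sbatch` works the same as for CPU jobs; the scheduler simply places the job on a node with a free GPU.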
RCAC Resources¶
- RCAC Blogs: Dive into insights from RCAC staff covering best practices, new features, and tips for getting the most out of our computing resources.
- Workshops & Tutorials: Hands-on training materials from RCAC workshops, covering topics from introductory Linux to advanced parallel computing and GPU programming.
- Software Catalog: Browse the complete catalog of software installed across RCAC clusters, including versions, module names, and usage instructions.
- Datasets: Access curated research datasets hosted on RCAC systems, including genomics references, machine learning benchmarks, and domain-specific collections.
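Installed software is typically accessed through environment modules, so the module names in the software catalog map directly to `module` commands on the cluster. A quick sketch, assuming an Lmod-style module system; the package and version strings here are illustrative examples, not guaranteed catalog entries.

```shell
# Typical environment-module workflow. Package names and version strings
# are examples -- use the software catalog to see what each cluster provides.
module avail                 # list software available on this cluster
module spider gcc            # search for a package and its installed versions
module load gcc/12.2.0       # load a specific version (example version string)
module list                  # show the modules currently loaded
```

Loading a module adjusts `PATH` and related environment variables for your session, so the same commands belong near the top of batch scripts as well.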
Need Help?¶
- Email Support: Reach the RCAC help desk for account issues, software requests, and technical questions.
- Community Discord: Join the Purdue research computing community to chat with peers and staff in real time.
- GitHub: Report documentation issues, suggest improvements, or contribute to RCAC open-source projects.
- Contact Details: Find office hours, phone numbers, and other ways to connect with the RCAC support team.