Cluster Computing
Computing at scale lets a researcher test many variables at once, shortening time to results, and makes it possible to tackle larger, more complex problems (e.g., larger data sets, longer simulation times, more degrees of freedom). Researchers can take advantage of the scale of a cluster by setting up workflows that split work into many tasks, which are scheduled across the cluster at the same time. Note that most clusters are built from commodity hardware, so an individual computation may be no faster than on a new workstation or laptop with a modern CPU and flash (SSD) storage; cluster computing is about scale.
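A common way to split work into batches on a SLURM cluster is a job array. The following is a minimal sketch, not a site-specific recipe: the task list `tasks.txt` and the per-task script `process_one.sh` are hypothetical, and partition names and limits should be adjusted to the cluster in question.

```bash
#!/bin/bash
# Sketch of a SLURM job array that fans a task list out across the cluster.
# Assumes a file tasks.txt with one input per line and a hypothetical
# per-task script process_one.sh.
#SBATCH --job-name=batch-sweep
#SBATCH --partition=shared
#SBATCH --time=0-12:00          # 12 hours of walltime
#SBATCH --mem=4G
#SBATCH --array=1-100           # one array element per line of tasks.txt

# Pick this element's input line and process it.
INPUT=$(sed -n "${SLURM_ARRAY_TASK_ID}p" tasks.txt)
./process_one.sh "$INPUT"
```

Submitted once with `sbatch`, this queues all 100 elements at the same time; the scheduler then runs them as fairshare allows.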
Eligibility is outlined below by provider, for offerings available either to the entire Harvard community or to a specific unit or appointment.
University-wide
Faculty of Arts and Sciences, Research Computing
Shared
FASRC Cluster Name: Cannon
Queuing System: SLURM
Access: All PI groups
Scheduling: Fairshare
Queue/Partition name: shared, general, bigmem, test
Features: 7-day maximum time limit, non-exclusive nodes, multi-node parallel
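The multi-node parallel feature of these partitions is typically used via MPI. A minimal sketch, where the application binary `my_mpi_app` is hypothetical:

```bash
#!/bin/bash
# Sketch of a multi-node MPI job on a shared SLURM partition.
# The application binary my_mpi_app is hypothetical.
#SBATCH --partition=shared
#SBATCH --nodes=4               # spread across 4 (non-exclusive) nodes
#SBATCH --ntasks-per-node=16
#SBATCH --time=2-00:00          # 2 days, under the 7-day maximum

srun ./my_mpi_app
```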
Dedicated
FASRC Cluster Name: Cannon
Queuing System: SLURM
Access: Restricted PI groups from Center/Dept./Lab
Scheduling: Fairshare
Queue/Partition name: pi_name or center name (e.g. huce)
Features: no maximum time limit, non-exclusive nodes, multi-node parallel
Backfill
FASRC Cluster Name: Cannon
Queuing System: SLURM
Access: All PI groups
Scheduling: Fairshare
Queue/Partition name: serial_requeue, gpu_requeue
Features: 3-day time limit, preemption (requeue), non-exclusive nodes, 8 cores max per job
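Backfill jobs must tolerate preemption; with SLURM this usually means submitting with `--requeue` and writing the job so it can restart safely. A sketch, where `resumable_step.sh` is a hypothetical script that checkpoints its own progress:

```bash
#!/bin/bash
# Sketch of a preemptible backfill job on serial_requeue.
# The job may be killed and requeued at any time, so the (hypothetical)
# script resumable_step.sh must checkpoint and resume safely.
#SBATCH --partition=serial_requeue
#SBATCH --requeue               # allow SLURM to requeue after preemption
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8      # 8 cores is the per-job maximum here
#SBATCH --time=1-00:00          # under the 3-day limit

./resumable_step.sh
```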
Audience
All PI groups (FASRC)
Service Provider
FASRC
Service Fee
None
Service Website
https://www.rc.fas.harvard.edu/fairshare/
Contact Information
Institute for Quantitative Social Science
Shared
IQSS Cluster Name: RCE Interactive and Batch
Queuing System: Condor
Access: Social researchers from across all Harvard schools and MIT
Scheduling: Fairshare
Queue/Partition name:
Features: Persistent, secure desktop sessions that support Level 3 research data, free access to popular statistical applications, large memory allocations (up to 1 TB RAM), and batch computing.
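Batch work on a Condor (HTCondor) pool is described in a submit file rather than a batch script. A minimal sketch, where `run_analysis.sh` is a hypothetical wrapper around the user's analysis:

```
# Sketch of an HTCondor submit description file.
# run_analysis.sh is a hypothetical wrapper; memory and job count
# should be set to fit the actual workload.
universe       = vanilla
executable     = run_analysis.sh
arguments      = $(Process)
output         = job.$(Process).out
error          = job.$(Process).err
log            = job.log
request_memory = 8GB
queue 50
```

Saved as, say, `analysis.sub`, this is submitted with `condor_submit analysis.sub` and queues 50 jobs, each receiving its index via `$(Process)`.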
Dedicated
IQSS Cluster Name: RCE Interactive and Batch
Queuing System: Condor
Access: First priority is given to members of the group that purchased the hardware, but when not fully utilized by the group, the server remains in the general resource pool.
Scheduling: Fairshare
Queue/Partition name: name indicating owning research group
Features: Dedicated computing with persistent, secure desktop sessions that support Level 3 research data, free access to popular statistical applications, large memory allocations (up to 1 TB RAM), and batch computing.
Current Dedicated Servers: Doshi-Velez, Edlabs, HarvardX, HKS, NSAPH
Backfill
IQSS Cluster Name: RCE Interactive and Batch
Queuing System: Condor
Access: Social researchers from across all Harvard schools and MIT.
Scheduling: Fairshare
Features: Same as shared resources, except preemption may occur to accommodate dedicated users
Audience
Social researchers from across all Harvard schools and MIT; priority is given to server owners.
Service Provider
IQSS
Service Fee
Cost of server plus nominal hosting fee (for dedicated cluster resources)
Service Website
https://www.iq.harvard.edu/research-computing
Contact Information
Unit/Appointment-specific
Harvard Business School
Shared
HBS Cluster Name: HBS Grid
Queuing System: LSF
Access: All groups
Scheduling: Fairshare
Queue/Partition Name: shared, general, bigmem, test
Features: 7-day maximum time limit, non-exclusive nodes, multi-node parallel
Cost: None
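LSF jobs are submitted with `bsub` rather than `sbatch`, with directives spelled `#BSUB`. A minimal sketch against the shared queue, where `run_model.sh` is a hypothetical analysis script:

```bash
#!/bin/bash
# Sketch of an LSF batch script for a shared queue.
# The analysis script run_model.sh is hypothetical.
#BSUB -q shared                 # queue name from the list above
#BSUB -n 4                      # 4 cores
#BSUB -W 24:00                  # 24-hour walltime, under the 7-day limit
#BSUB -o job.%J.out             # %J expands to the LSF job ID
#BSUB -e job.%J.err

./run_model.sh
```

Note the submission style: LSF reads the directives from standard input, so the script is submitted as `bsub < job.lsf`.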
Audience
HBS Faculty, HBS Doctoral Students, HBS MBA Students, HBS Staff.
Service Provider
HBS
Service Fee
None
Service Website
Contact Information
Harvard Medical School
Shared
Cluster Name: O2
Queuing System: SLURM
The O2 cluster features CPU and GPU computing, with access to Scratch and Tier 1 storage (home and group folders).
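GPU computing on a SLURM cluster is typically requested as a generic resource (GRES). A minimal sketch; the partition name `gpu`, the module name, and `train.py` are assumptions, so check the cluster's own documentation for actual partition and GRES names:

```bash
#!/bin/bash
# Sketch of a GPU job on a SLURM cluster such as O2.
# Partition name "gpu", the module name, and train.py are assumptions.
#SBATCH --partition=gpu
#SBATCH --gres=gpu:1            # request one GPU
#SBATCH --mem=16G
#SBATCH --time=12:00:00

module load python              # site module names vary
python train.py
```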
Audience
HMS Account (aka eCommons Account) holders. Note: some resources may be available only to HMS researchers from labs whose PIs have a primary or secondary faculty appointment in an HMS Quad department.
HMS, HSDM, HSPH, and affiliate hospital users can activate their HMS account prior to submitting an O2 Account Request. HMS Faculty can also sponsor accounts for guests.
Service Provider
HMS
Service Fee
None
Service Website
https://it.hms.harvard.edu/our-services/research-computing
Contact Information
Contact Amir Karger at rchelp@hms.harvard.edu