Clusters

Faculty of Arts and Sciences

Grace

Login Node: grace.hpc.yale.edu
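
Each cluster is reached over ssh to its login node. For example, to log in to Grace (assuming a Yale NetID as the username; replace netid with your own):

    ssh netid@grace.hpc.yale.edu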

The Grace cluster primarily serves researchers within the Faculty of Arts and Sciences (FAS). Grace consists of over 200 nodes and 4,700 cores, with roughly 1.4 PB of usable high-performance storage. Grace uses the LSF scheduler.
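
Jobs on Grace are submitted to LSF as batch scripts with #BSUB directives. The following is a minimal sketch; the job name, core count, and time limit are illustrative assumptions, not Grace defaults:

    #!/bin/bash
    #BSUB -J hello            # job name
    #BSUB -n 4                # number of cores
    #BSUB -W 01:00            # walltime limit (hh:mm)
    #BSUB -o hello.%J.out     # stdout file; %J expands to the job ID

    # The script body runs on the allocated cores
    echo "Running on $HOSTNAME with $LSB_DJOB_NUMPROC cores"

Submit with bsub < hello.sh (LSF reads the #BSUB directives from standard input) and monitor with bjobs.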

Omega

Login Node: omega.hpc.yale.edu

The Omega cluster also primarily serves FAS researchers. Omega consists of over 1,000 compute nodes with more than 8,500 cores and roughly 1.2 PB of usable high-performance storage. Omega uses Torque/Moab for job scheduling.
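
Under Torque/Moab, batch scripts use #PBS directives and are submitted with qsub. Again, a minimal sketch with assumed rather than site-specific resource requests:

    #!/bin/bash
    #PBS -N hello                  # job name
    #PBS -l nodes=1:ppn=4          # one node, four processors per node
    #PBS -l walltime=01:00:00      # walltime limit (hh:mm:ss)
    #PBS -o hello.out              # stdout file

    cd "$PBS_O_WORKDIR"            # Torque starts jobs in $HOME by default
    echo "Running on $(hostname)"

Submit with qsub hello.sh; qstat (or showq under Moab) shows queue status.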

Milgram

Login Nodes: see the cluster page for details

The Milgram cluster is a HIPAA-aligned cluster intended for use on projects that may involve secure patient records. This applies to both storage and computation. Milgram consists of 12 nodes and 240 cores. Milgram uses the LSF scheduler.
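
In addition to batch jobs (see the Grace example above), LSF supports interactive sessions, which can be useful for testing before submitting batch work. A minimal sketch; the core count and shell are illustrative:

    # Request an interactive shell with a pseudo-terminal on one core
    bsub -Is -n 1 bash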

Medical School

Farnam

Login Node: farnam.hpc.yale.edu

Web jobs/nodes monitoring tool

The Farnam cluster primarily serves researchers within the Yale Medical School. Farnam initially consisted of 115 twenty-core compute nodes, 2 GPU nodes, and 2 large-memory nodes; it also includes many nodes migrated from the now-retired Louise cluster. Storage consists of 1.5 PB of high-performance storage. Farnam uses the Slurm scheduler.
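
Slurm batch scripts use #SBATCH directives and are submitted with sbatch; the resource requests below are illustrative assumptions, not Farnam defaults:

    #!/bin/bash
    #SBATCH --job-name=hello
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=4
    #SBATCH --time=01:00:00
    #SBATCH --output=hello_%j.out   # %j expands to the job ID

    echo "Running on $(hostname) with $SLURM_CPUS_PER_TASK CPUs"

Submit with sbatch hello.sh and monitor with squeue.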

Ruddle

Login Node: ruddle.hpc.yale.edu

Web jobs/nodes monitoring tool

The Ruddle cluster is intended for use only on projects related to the Yale Center for Genome Analysis (YCGA). This applies to both storage and computation. Ruddle consists of over 150 nodes and more than 3,000 cores, with roughly 2 PB of usable high-performance storage. See the cluster page for detailed information about Ruddle, including two-factor authentication.