Louise Whitman Farnam

The Farnam Cluster is named for Louise Whitman Farnam, the first woman to graduate from the Yale School of Medicine, class of 1916.

Logging in

If you are a first-time user, make sure to read the pertinent links in our user guide about using ssh. Once you have submitted a copy of your public key to us, you should be able to ssh to farnam.hpc.yale.edu. As with the other Yale clusters, there are two login nodes; you will be placed on one of them at random.
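For example, a login looks like this (netid is a placeholder for your own Yale NetID):

```shell
# Connect to Farnam over ssh; "netid" is a placeholder for your NetID.
# You will land on one of the two login nodes.
ssh netid@farnam.hpc.yale.edu
```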

Partitions and Scheduler

Farnam uses the Slurm job scheduler. What other schedulers call queues, Slurm calls partitions. The partitions available for general use are general, interactive, and scavenge. The general and interactive partitions are made up of the same compute nodes. The scavenge partition allows access to unused nodes from other partitions. However, if a job submitted to another partition requests resources that can only be satisfied by killing a scavenge job, the scavenge job will be forcefully terminated. We therefore recommend that all jobs run on scavenge be either short-lived or capable of checkpointing their state. All other partitions belong to PIs and require permission from that principal investigator to use.

| Partition | m610 (c01-c08) | m620 (c09-c12) | m915 (c24-c25) | nx360 (c13-c23) | GPX XT4 (gpu0x) | 3850X6 (bigmem0x) | Cores | Resource Limits Per User | Walltime default/max (days) |
|---|---|---|---|---|---|---|---|---|---|
| interactive | 40 | 34 | | 94 | | | 2744 | 20 CPUs, 256 GB RAM | |
| general | 40 | 34 | | 94 | | | 2744 | 100 CPUs, 640 GB RAM | |
| scavenge | all | all | all | | | | 5328 | 400 CPUs, 2500 GB RAM | |
| gpu | | | | 2 | 1 | | 48 | | 1/2 |
| bigmem | | | 9 | | | 2 | 576 | 32 CPUs, 1.5 TB RAM | |
| pi_breaker | 16 | | | | | | 128 | | 1/14 |
| pi_gerstein | 32 | | 2 | | | | 384 | | 1/14 |
| pi_gerstein_gpu | | | | 3 | | | 60 | | 1/14 |
| pi_kleinstein | | | 1 | 3 | | | 124 | | 1/14 |
| pi_krauthammer | | | | 1 | | | 20 | | 1/14 |
| pi_ma | | | | 2 | | | 40 | | 1/14 |
| pi_murray | 24 | | | | | | 192 | | 1/14 |
| pi_ohern | | 6 | | 3 | | | 156 | | 1/14 |
| pi_sigworth | | | | 1 | | | 20 | | 1/14 |
| pi_sindelar | | 4 | 1 | 1 | | | 148 | | 1/14 |
| pi_strobel | | | 1 | | | | 64 | | 1/14 |
| pi_townsend | | | | 5 | | | 100 | | 1/14 |
| pi_zhao | 16 | 17 | 1 | | | | 464 | | 1/14 |
| Total | 128 | 61 | 15 | 120 | 2 | 1 | 5368 | | |
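Partition selection is done with Slurm's --partition (-p) option. A minimal sketch (my_job.sh is a placeholder for your own job script):

```shell
# Show the partitions visible to you and their current node states:
sinfo

# Submit a batch job to the general partition
# (my_job.sh is a placeholder for your own job script):
sbatch --partition=general my_job.sh

# Submit to scavenge instead; jobs here can be preempted at any
# time, so they should be short or able to checkpoint their state:
sbatch --partition=scavenge my_job.sh
```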


The memory you request for your job is enforced; your job will encounter errors if it attempts to use more. Use --mem-per-cpu to request an adequate amount of RAM when submitting jobs to avoid running into problems. Note that the default value is small: 1024 MB per CPU.
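A minimal batch script that requests memory explicitly might look like the following (the partition, counts, and program name are illustrative):

```shell
#!/bin/bash
#SBATCH --partition=general
#SBATCH --cpus-per-task=4        # 4 CPUs for this task
#SBATCH --mem-per-cpu=4096       # 4 GB per CPU; the default is only 1024 MB

# my_program is a placeholder for your own executable:
./my_program
```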

The bigmem partition is available to users with jobs requiring large amounts of RAM. Please contact us for access.


The gpu partition is available for users who want to use our Nvidia GPUs. There are currently 8 K80 and 4 1080Ti GPUs available on Farnam for general use. To request specific GPUs, specify a gres GPU type of either k80 or 1080ti in your job request, e.g. --gres=gpu:k80:1 or --gres=gpu:1080ti:1. The nodes with 1080Ti GPUs have 1.2 TB of fast SSD mounted at /tmp. For additional information on GPUs and Slurm, please see our Slurm page. Please note that GPU nodes belonging to PIs are available via the scavenge partition.
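For instance, a GPU job script might look like this sketch (values are illustrative):

```shell
#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --gres=gpu:k80:1         # or --gres=gpu:1080ti:1 for a 1080Ti

# nvidia-smi reports which GPU(s) the job was allocated:
nvidia-smi
```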


You can monitor your jobs and the nodes they're running on via this site.


Farnam uses the module system for managing software packages. See our documentation on modules here. If you'd like something installed that isn't available, please contact us. Here are lists of libraries available in our default modules for Python and R:
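A typical module workflow looks like the following (the module name here is an assumption; run module avail on Farnam to see the exact names and versions):

```shell
# See which versions of a package are available:
module avail Python

# Load one, then verify it is active:
module load Python
module list
```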

Note: Some caution is necessary when compiling your own software on Farnam (including your personal R libraries). Depending on which CPU SIMD instruction sets your code expects to be available, it may not run on older nodes. The newest nodes understand the sse4_2, avx, and avx2 vectorized instruction sets. Some older nodes only understand sse4_2.

We specify these instruction sets, CPU types, and CPU codenames as Slurm features (listed below for each node type), which you can require for your job with the --constraint option. For more information on how to use constraints to control which nodes your jobs will run on, see our Slurm page.
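For example, to restrict a job to nodes with AVX2 support, or to a particular CPU model (my_job.sh is a placeholder for your own job script):

```shell
# Only run on nodes whose CPUs support AVX2:
sbatch --constraint=avx2 my_job.sh

# Or require a specific CPU model by its feature name:
sbatch --constraint=E5-2660_v3 my_job.sh
```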

Except where noted, we have compiled software targeting the lowest commonly understood instruction set, so software loaded with module load will run anywhere. If you believe your software will significantly benefit from being compiled differently, please let us know.

Compute Hardware

| Node Type | Processor | Features | Cores | RAM (MB) |
|---|---|---|---|---|
| Dell PowerEdge M610 | (2) E5-620 | westmere, sse4_2, E5-620 | 8 | 44000 |
| Dell PowerEdge M620 | (2) E5-2670 | sandybridge, sse4_2, avx, E5-2670 | 16 | 125000 |
| Dell PowerEdge M915 | (4) AMD Opteron 6276 | bulldozer, sse4_2, avx, opteron-6276 | 32 | 1535000 |
| Lenovo nx360 M5 | (2) E5-2660 v3 | haswell, v3, sse4_2, avx, avx2, E5-2660_v3 | 20 | 124000 |
| Lenovo nx360 M5 w/GPUs | (2) E5-2660 v3, (2) Nvidia K80 | haswell, v3, sse4_2, avx, avx2, E5-2660_v3 | 20 | 124000 |
| Thinkmate GPX XT4 | (2) E5-2623 v4, (4) Nvidia 1080Ti | broadwell, v4, sse4_2, avx, avx2, E5-2623_v4 | 8 | 62000 |
| Lenovo 3850X6 | (4) E7-4809 v3 | haswell, v3, sse4_2, avx, avx2, E7-4809_v3 | 32 | 515000 |


We install commonly used genomes, built in a variety of formats, here:


Please let us know if you'd like us to install additional genomes or formats.


Farnam has 1500 TB (usable) of Lenovo GPFS parallel file storage, of which 1000 TB is for general use.

| Partition | Quota |
|---|---|
| /ysm-gpfs/home | 125 GB/user |
| /ysm-gpfs/project | 4 TB/group |
| /ysm-gpfs/scratch60 | 10 TB/group |

The script /ysm-gpfs/bin/my_quota.sh will show your current storage usage and limits.

The script /ysm-gpfs/bin/group_quota.sh reports on your group. Note that it is only updated once daily.


Each PI group is provided with storage space for research data on the HPC clusters. The storage is separated into three tiers: home, project, and temporary scratch.


Home

Home storage is designed for reliability, rather than performance. Do not use this space for routine computation. Use this space to store your scripts, notes, etc. Home storage is backed up daily.


Project

In general, project storage is intended to be the primary storage location for HPC research data in active use. Project storage is not backed up.

60-Day Scratch (scratch60)

This temporary storage should typically give you the best performance. Files older than 60 days are deleted automatically. This space is not backed up, and you may be asked to delete files younger than 60 days if the space fills up.
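To see which of your files are approaching or past the cutoff, you can look for files untouched in the last 60 days. The directory path below is an assumption about the layout; substitute your group's actual scratch60 directory:

```shell
# List files not modified within the last 60 days; these are
# candidates for automatic deletion:
find /ysm-gpfs/scratch60/$USER -type f -mtime +60
```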

Other Storage Options

If you or your group finds these quotas don't accommodate your needs, contact us at hpc@yale.edu.

You can also mount Storage@Yale, which is a service offered by Yale ITS to University members. Note that S@Y mounted on a cluster will not be available to be mounted elsewhere. To request S@Y mounted on the clusters, fill out our S@Y Request Form.