The Farnam Cluster is named for Louise Whitman Farnam, the first woman to graduate from the Yale School of Medicine, class of 1916.
If you are a first-time user, make sure to read the pertinent links in our user guide about using SSH. Once you have submitted a copy of your public key to us, you should be able to ssh to farnam.hpc.yale.edu. As with the other Yale clusters, there are two login nodes; you will be placed on one of them at random.
Partitions and Scheduler
Farnam uses the Slurm job scheduler. What we referred to as queues under other schedulers are called partitions in Slurm. The partitions available for general use are general, interactive, and scavenge. The general and interactive partitions comprise the same compute nodes. The scavenge partition allows access to unused nodes in other partitions; however, if a job submitted to any other partition requests resources that can only be satisfied by killing a scavenge job, the scavenge job will be forcefully terminated. We therefore recommend that all jobs run on scavenge be either short-lived or capable of checkpointing their state. PI partitions require permission from the corresponding principal investigator.
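For example, a scavenge job might look like the following batch script (a sketch: the program name and its checkpoint flag are hypothetical; `--requeue` asks Slurm to put the job back in the queue if it is preempted):

```shell
#!/bin/bash
#SBATCH --partition=scavenge        # run on otherwise-idle nodes; may be preempted
#SBATCH --time=02:00:00             # keep scavenge jobs short-lived
#SBATCH --requeue                   # requeue the job if it is preempted

# hypothetical program that writes periodic checkpoints so it can resume after preemption
./my_analysis --checkpoint-dir "$HOME/checkpoints"
```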
Partition table (columns: Partition, m610, m620, m915, nx360, GPX XT4, 3850X6, Cores, Resource Limits; memory configurations include 256 GB, 640 GB, 2500 GB, and 1.5 TB of RAM).
The memory you request for your job is enforced; you will encounter errors if you attempt to use more. Use --mem-per-cpu to request an adequate amount of RAM when submitting jobs. Note that the default value is small: 1024 MB per CPU.
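For example, a job that needs roughly 4 GB per CPU could include directives like these (the values and program name are illustrative):

```shell
#!/bin/bash
#SBATCH --partition=general
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=4096          # 4 GB per CPU; the default is only 1024 MB

./my_program                        # hypothetical program needing ~16 GB total
```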
The bigmem partition is available to users with jobs requiring large amounts of RAM. Please contact us for access.
The gpu partition is available to users who want to use our Nvidia GPUs. There are currently 8 K80 and 4 1080Ti GPUs available on Farnam for general use. To request specific GPUs, specify a gres gpu type of either k80 or 1080ti in your job request, e.g. --gres=gpu:1080ti:1. The nodes with 1080Ti GPUs have 1.2 TB of fast SSD mounted at /tmp. For additional information on GPUs and Slurm, please see our Slurm page. Please note that GPU nodes belonging to PIs are available via the scavenge partition.
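A GPU job script might look like the following sketch (the program name is hypothetical; swap the gres type for k80 to request the K80 nodes):

```shell
#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --gres=gpu:1080ti:1         # one 1080Ti; use gpu:k80:1 for a K80 instead

# hypothetical GPU program; the 1080Ti nodes also have fast SSD scratch at /tmp
./my_gpu_program --tmpdir /tmp
```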
You can monitor your jobs and the nodes they're running on via this site.
Farnam uses the module system for managing software packages; see our documentation on modules here. If you'd like something installed that isn't available, please contact us. Here are lists of libraries available in our default modules for Python and R:
Note: Some caution is necessary when compiling your own software on Farnam (including your personal R libraries). Depending on which CPU SIMD instruction sets your code expects to be available, it may not run on older nodes. The newest nodes support the sse4_2, avx, and avx2 vectorized instruction sets; some older nodes support only sse4_2.
We expose these instruction sets, CPU types, and CPU codenames as Slurm features (listed below for each node type), which you can require for your job with the --constraint option. For more information on how to use constraints to control which nodes your jobs run on, see our Slurm page.
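For example, to restrict a job to AVX2-capable nodes you could use directives like these (a sketch; the binary name is hypothetical, and the feature names are those listed in the node table below):

```shell
#!/bin/bash
#SBATCH --constraint=avx2           # only nodes that support AVX2
# or pin to one CPU model instead, e.g.:
##SBATCH --constraint=E5-2660_v3

./my_avx2_program                   # hypothetical binary compiled with AVX2
```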
Except where noted, we have compiled software targeting the lowest commonly supported instruction set, so software loaded with module load will run anywhere. If you believe your software would benefit significantly from being compiled differently, please let us know.
|Node Type||Processor||Features||Cores||RAM (MB)|
|Dell PowerEdge M610||(2) E5-620||westmere, sse4_2, E5-620||8||44000|
|Dell PowerEdge M620||(2) E5-2670||sandybridge, sse4_2, avx, E5-2670||16||125000|
|Dell PowerEdge M915||(4) AMD Opteron 6276||bulldozer, sse4_2, avx, opteron-6276||32||1535000|
|Lenovo nx360 M5||(2) E5-2660 v3||haswell, v3, sse4_2, avx, avx2, E5-2660_v3||20||124000|
|Lenovo nx360 M5 w/GPUs||(2) E5-2660 v3, (2) Nvidia K80||haswell, v3, sse4_2, avx, avx2, E5-2660_v3||20||124000|
|Thinkmate GPX XT4||(2) E5-2623 v4, (4) Nvidia 1080Ti||broadwell, v4, sse4_2, avx, avx2, E5-2623_v4||8||62000|
|Lenovo 3850X6||(4) E7-4809 v3||haswell, v3, sse4_2, avx, avx2, E7-4809_v3||32||515000|
We install commonly used genomes, built in a variety of formats, here:
Please let us know if you'd like us to install additional genomes or formats.
Farnam has 1500 TB (usable) of Lenovo GPFS parallel file storage, of which 1000 TB is for general use.
/ysm-gpfs/bin/my_quota.sh reports your current storage usage and limits; /ysm-gpfs/bin/group_quota.sh reports on your group. Note that the group report is updated only once daily.
Each PI group is provided with storage space for research data on the HPC clusters. The storage is separated into three tiers: home, project, and temporary scratch.
Home storage is designed for reliability, rather than performance. Do not use this space for routine computation. Use this space to store your scripts, notes, etc. Home storage is backed up daily.
In general, project storage is intended to be the primary storage location for HPC research data in active use. Project storage is not backed up.
60-Day Scratch
This temporary storage should typically give you the best performance. Files older than 60 days are deleted automatically. This space is not backed up, and if it fills up you may be asked to delete files even if they are less than 60 days old.
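To see which of your scratch files are approaching the 60-day purge, you can use find. This is a sketch: SCRATCH_DIR here is a placeholder for your actual scratch directory path.

```shell
# SCRATCH_DIR is a placeholder -- substitute your actual 60-day scratch path
SCRATCH_DIR="${SCRATCH_DIR:-$HOME}"
# list files not modified in the last 50 days, i.e. close to the purge window
find "$SCRATCH_DIR" -type f -mtime +50
```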
Other Storage Options
If you or your group finds these quotas don't accommodate your needs, contact us at firstname.lastname@example.org.
You can also mount Storage@Yale, a service offered by Yale ITS to University members. Note that a S@Y share mounted on a cluster cannot also be mounted elsewhere. To request S@Y on the clusters, fill out our S@Y Request Form.