Farnam

About

Louise Whitman Farnam

The Farnam Cluster is named for Louise Whitman Farnam, the first woman to graduate from the Yale School of Medicine, class of 1916.

Logging in

If you are a first-time user, make sure to read the pertinent links in our user guide about using SSH. Once you have submitted a copy of your public key to us, you should be able to ssh to farnam.hpc.yale.edu. You will be prompted to authenticate with Duo, Yale's multifactor authentication service. As with the other Yale clusters, there are two login nodes; you will be randomly placed on one of them.
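
For example, if your Yale NetID were netid (a placeholder), you would connect with:

ssh netid@farnam.hpc.yale.edu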

Partitions and Scheduler

Farnam uses the Slurm job scheduler. What we have referred to as queues for other schedulers are called partitions here. The partitions available for general use are general, scavenge, and interactive. PI partitions require permission from the corresponding principal investigator.
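
You can check which nodes in these partitions are currently allocated or idle with Slurm's sinfo command, for example:

sinfo -p general,scavenge,interactive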

The scavenge partition allows access to unused nodes from other partitions. However, jobs running here will be forcefully aborted if other partitions require nodes and the request can only be satisfied by killing a scavenge job. We recommend that all jobs run on scavenge be either short-lived or capable of checkpointing their state.
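
For example, a batch script for a short scavenge job might look like the sketch below; the resources and program name are placeholders, and --requeue is only appropriate if your job can safely restart from the beginning or from a checkpoint:

#!/bin/bash
#SBATCH --partition=scavenge
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=1024
# keep scavenge jobs short so little work is lost if the job is preempted
#SBATCH --time=6:00:00
# ask Slurm to requeue the job if it is killed to free up nodes
#SBATCH --requeue
./my_analysis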

Partition        m610     m620     m915     nx360    3850X6    Cores   Resource Limits Per User   Walltime default/max
Host Names       c01-c08  c09-c12  c24-c25  c13-c23  bigmem0x
interactive      -        -        -        5        -         100     4 CPUs                     1/1
general          40       34       -        94       -         2744    100 CPUs, 640 GB RAM       7/30
scavenge         all      all      all      -        -         5328    400 CPUs, 2500 GB RAM      7/30
gpu              -        -        -        2        -         40      -                          1/7
bigmem           -        -        9        -        2         576     1 node, exclusive          1/7
pi_breaker       16       -        -        -        -         128     -                          14/14
pi_gerstein      32       -        2        -        -         384     -                          14/14
pi_gerstein_gpu  -        -        -        3        -         60      -                          14/14
pi_kleinstein    -        -        1        3        -         124     -                          14/14
pi_krauthammer   -        -        -        1        -         20      -                          14/14
pi_ma            -        -        -        2        -         40      -                          14/14
pi_murray        24       -        -        -        -         192     -                          14/14
pi_ohern         -        6        -        3        -         156     -                          14/14
pi_sigworth      -        -        -        1        -         20      -                          14/14
pi_sindelar      -        4        1        1        -         148     -                          14/14
pi_strobel       -        -        1        -        -         64      -                          14/14
pi_townsend      -        -        -        5        -         100     -                          14/14
pi_zhao          16       17       1        -        -         464     -                          14/14
Total            128      61       15       120      2         5360

The memory you request for a job is enforced; you will encounter errors if you attempt to use more. You should use --mem-per-cpu when submitting jobs to avoid running into problems; the default value is 1024 MB per CPU.

You can specify a particular node type (e.g. nx360, m610, etc.) by using -C type.
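
For example, an interactive session requesting four cores with 2 GB of memory per CPU on an nx360 node (the core count, memory, and walltime here are only illustrative):

srun --pty -p general -c 4 --mem-per-cpu=2048 -C nx360 -t 4:00:00 bash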

The bigmem partition is available to users with jobs requiring large amounts of RAM. Please contact us for access.

The gpu partition is available for users who want to use our Nvidia GPUs. When allocating nodes, you must also request one or more GPUs using --gres=gpu:N. For example:

srun --pty --x11 -p gpu -c 10 -t 24:00:00 \
     --gres=gpu:2 --gres-flags=enforce-binding bash
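
The same request in a batch script would look something like the sketch below (the core count, GPU count, and program name are carried over from the interactive example and are only illustrative):

#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --cpus-per-task=10
#SBATCH --time=24:00:00
#SBATCH --gres=gpu:2
#SBATCH --gres-flags=enforce-binding
./my_gpu_program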

Note that GPU nodes belonging to PIs are also available via the scavenge partition.

You can monitor your jobs and the nodes they're running on via this site.

Software

Farnam uses the module system for managing software packages. See our documentation on modules here. If you'd like something installed that isn't available, please contact us. All new software on Farnam will be available as a module. Here are lists of the libraries available in our default modules for Python and R:
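
For example, to see what is installed and load a package (the module names below are placeholders; run module avail to see the exact names and versions on Farnam):

module avail            # list all available modules
module avail Python     # search for modules matching a name
module load R           # load the default version of a module
module list             # show currently loaded modules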

Note: Some caution is necessary when compiling your own software on Farnam (even your personal R libraries). Depending on what CPU instruction sets your code expects to be available (e.g. SSE, AVX), your code may not run on older nodes. To remedy this you have two options:

  1. Make sure you compile and run your code only on the newest nodes. This may give you the best performance. To ensure your Slurm jobs run on the new nodes, add them as a constraint when you submit your job with --constraint=nx360 (see the example batch script after this list).
  2. To make sure your compiled code will run on all the CPU architectures available to you, compile on an older node with --constraint=m610. This is usually how software is compiled for the modules available on Farnam.
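
For example, a minimal batch script that pins a job to the newer nodes might look like this sketch (the partition, resources, and program name are placeholders):

#!/bin/bash
#SBATCH --partition=general
#SBATCH --constraint=nx360
#SBATCH --cpus-per-task=4
#SBATCH --time=12:00:00
# run a binary that was compiled on an nx360 node
./my_compiled_program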

Compute Hardware

Node Type Processor Speed Cores RAM (GB)
Lenovo nx360 M5 (2) E5-2660 v3 2.6GHz 20 128
Lenovo nx360 M5 w/GPUs (2) E5-2660 v3, (2) Nvidia K80 2.6GHz 20 128
Lenovo x3850 x6 (4) E7-4809 v3 2.133GHz 32 1500
Dell PowerEdge M610 (2) E5620 2.4GHz 8 48
Dell PowerEdge M620 (2) E5-2670 2.6GHz 16 128
Dell PowerEdge M915 (4) AMD Opteron 6276 2.3GHz 32 512

Genomes

We install commonly used genomes, built in a variety of formats, here:

/ysm-gpfs/datasets/genomes
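
To see what is currently available, you can list that directory:

ls /ysm-gpfs/datasets/genomes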

Please let us know if you'd like us to install additional genomes or formats.

Storage

Farnam has 1500 TB (usable) of Lenovo GPFS parallel file storage, of which 1000 TB is for general use.

Partition Quota
/ysm-gpfs/home 125GB/user
/ysm-gpfs/project 4TB/group
/ysm-gpfs/scratch60 10TB/group

The script /ysm-gpfs/bin/my_quota.sh reports your current storage usage and limits.

The script /ysm-gpfs/bin/group_quota.sh reports on your group. Note that it is only updated once daily.

Storage Tiers

Each PI group is provided with storage space for research data on the HPC clusters. The storage is separated into three tiers: home, project, and temporary scratch.

Home

Home storage is designed for reliability, rather than performance. Do not use this space for routine computation. Use this space to store your scripts, notes, etc. Home storage is backed up daily.

Project

In general, project storage is intended to be the primary storage location for HPC research data in active use. Project storage is not backed up.

60-Day Scratch (scratch60)

This temporary storage should typically give you the best performance. Files older than 60 days will be deleted automatically. This space is not backed up, and you may be asked to delete files younger than 60 days old if this space fills up.
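
A common pattern is to write intermediate files to scratch60 while a job runs and to copy the results you want to keep back to project storage when it finishes. A rough sketch, assuming your group's directories sit under the paths shown in the quota table above (the exact subdirectory layout for your group may differ):

# placeholders: substitute your group's actual directories
SCRATCH=/ysm-gpfs/scratch60/<your_group>
PROJECT=/ysm-gpfs/project/<your_group>
cd "$SCRATCH"
./generate_intermediate_data
cp results.out "$PROJECT"/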

Other Storage Options

If you or your group finds these quotas don't accommodate your needs, contact us at hpc@yale.edu.

You can also mount Storage@Yale, a service offered by Yale ITS to University members. Note that a Storage@Yale share mounted on a cluster will not be available to be mounted elsewhere. To request that S@Y be mounted on the clusters, fill out our S@Y Request Form.