User Manual

Brand New?

Give the getting started page a read, or check out our commands cheatsheet.

Access the Clusters

We use SSH with key pairs to log in to the clusters, e.g.

ssh netid@clustername.hpc.yale.edu

If you have a public key and know what you're doing, upload it here and in a few minutes you should be able to log in.
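
If you don't yet have a key pair, you can generate one on your local machine first. A minimal sketch (the ed25519 key type and default file location here are just one common choice; any supported type will do):

# Generate a key pair on your local machine; accept the default location and choose a passphrase
ssh-keygen -t ed25519

# Print the public key so you can copy it into the uploader
cat ~/.ssh/id_ed25519.pub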

For more detailed instructions, click the links below.

For additional information about using graphical interfaces or connecting to the clusters from off campus, see below.

Submit Jobs to the Scheduler

We use a scheduler called "Slurm" to submit jobs to the compute nodes.
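
As a minimal sketch of what a submission looks like (the job name, resource values, and filename here are illustrative, not site defaults), save the following as example.sh:

#!/bin/bash
#SBATCH --job-name=example
#SBATCH --time=01:00:00
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=4G

echo "Hello from $(hostname)"

Then submit and monitor it from a login node:

sbatch example.sh
squeue -u $USER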

Here is a PDF from the Slurm developers showing equivalent commands across several schedulers.

If your jobs are taking a long time to start, see our Job Scheduling documentation to review the factors that determine when jobs start.
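
To see why a particular job is still waiting, you can check its state with standard Slurm commands (replace <jobid> with your job's ID):

squeue -u $USER             # the REASON column hints at why a job is still pending
squeue -j <jobid> --start   # Slurm's estimate of when the job will start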

If you are submitting a large number of similar jobs, please look at the Dead Simple Queue tool for bundling your jobs.
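
The typical dSQ workflow is to put one command per line in a job file and let dsq generate the submission script. A sketch, with illustrative flags and filenames (check dsq --help and the dSQ docs on your cluster for the exact options):

# joblist.txt contains one command per line, e.g.
#   python process.py sample1
#   python process.py sample2

dsq --job-file joblist.txt --mem-per-cpu 4g -t 1:00:00

This writes a batch script that you then submit with sbatch.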

Example Submission and Parallel R, MATLAB, and Python Scripts

Transfer Files and Data

Push your bits and bytes to and from the clusters.
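
For command-line transfers, scp and rsync both work over the same SSH connection you use to log in. A sketch (the paths and cluster hostname are placeholders):

# Copy a single file to your home directory on the cluster
scp results.tar.gz netid@clustername.hpc.yale.edu:~/

# Mirror a local directory to the cluster, resuming cleanly if interrupted
rsync -avP data/ netid@clustername.hpc.yale.edu:data/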

Managing Data

Each cluster has a small amount of space dedicated to home directories, meant for notes and scripts. There are project and scratch60 spaces for larger and more numerous files.
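
Standard tools give a quick view of how much space you are using (the path below is a placeholder; your cluster may also provide its own quota command, so check its storage documentation):

du -sh /path/to/your/project   # total size of a directory tree
df -h .                        # free space on the filesystem under the current directory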

To purchase additional project space above your original allocation, contact us at hpc@yale.edu.

Software

We use modules to manage many of the software packages installed on the Yale clusters. Modules allow you to add or remove different combinations and versions of software to your environment as needed. See our module guide for more info. Below are links to recently generated lists of the available software on each cluster:

You can run module avail to page through all available software once you log in.
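
Beyond module avail, a few standard module commands cover most day-to-day use (the package name and version here are illustrative; the versions on each cluster will differ):

module avail python        # search for modules matching a name
module load Python/3.8.6   # add a specific version to your environment
module list                # show what is currently loaded
module unload Python/3.8.6 # remove it again
module purge               # clear all loaded modules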

You should also feel free to install things for yourself. For Python environments, we especially like Anaconda Python. You can also bring your own container with Singularity.
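
As a sketch of both approaches (the miniconda module name, environment name, and container tag below are assumptions; check what your cluster actually provides):

# Conda: create and use an isolated Python environment
module load miniconda                      # assumption: the cluster provides a conda module
conda create -n myenv python=3.11 numpy
conda activate myenv

# Singularity: pull a container image and run a command inside it
singularity pull docker://python:3.11-slim
singularity exec python_3.11-slim.sif python --version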

We provide guides for certain software packages and languages as well.

Tips for Running on the Clusters

A few tips and tricks you might find useful.

Need Additional Help?

If you have additional questions/comments, please contact the HPC team. If possible, please include the following information:

  • Your netid
  • Cluster
  • Queue/partition name
  • Job ID(s)
  • Error messages
  • Command used to submit the job(s)
  • Path(s) to scripts called by the submission command
  • Path(s) to output files from your jobs