User Guide

Access the Clusters

We use ssh with SSH key pairs to log in to the clusters.
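As a sketch, a typical login looks like the following. The hostname cluster.example.edu and the username netid are placeholders; substitute your own cluster's login node and your own NetID.

```shell
# Generate an SSH key pair on your local machine (only needed once):
ssh-keygen -t ed25519

# Log in to a cluster login node (hostname is a placeholder):
ssh netid@cluster.example.edu

# Optionally, add an entry to ~/.ssh/config so that "ssh mycluster" works:
#   Host mycluster
#       HostName cluster.example.edu
#       User netid
```
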


For more details, check the links below.

Submit Jobs to the Scheduler

We use schedulers to submit jobs to the compute nodes.

The Slurm developers provide a PDF showing the equivalent commands across several schedulers.

(Note: over the course of 2017, we will be transitioning all clusters to Slurm.)
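As a minimal Slurm sketch, a batch script and the commands to submit and monitor it might look like this. The partition name, resource amounts, and program name are all illustrative placeholders; check your cluster's documentation for real values.

```shell
#!/bin/bash
#SBATCH --job-name=myjob         # all values below are illustrative
#SBATCH --partition=general      # placeholder partition name
#SBATCH --time=01:00:00          # walltime limit (HH:MM:SS)
#SBATCH --ntasks=1
#SBATCH --mem=4G

./my_program                     # placeholder for your actual command
```

Submit the script with sbatch job.sh, check its status with squeue -u $USER, and cancel it with scancel followed by the job ID.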

Transfer Files and Data

Push your bits and bytes to and from the clusters.
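For example, scp and rsync both work over your existing SSH setup. Hostnames, usernames, and paths below are placeholders.

```shell
# Copy a single file to the cluster (hostname is a placeholder):
scp results.tar.gz netid@cluster.example.edu:~/project/

# rsync only copies what has changed and can resume interrupted
# transfers, which is friendlier for large data sets:
rsync -avP data/ netid@cluster.example.edu:~/project/data/
```
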

Managing Data

Each cluster has a small home space for notes and scripts, plus scratch and/or project spaces for high-performance computation. See below for more details on your cluster.
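To keep an eye on your usage, du works everywhere; quota-reporting commands vary by cluster and filesystem, so the second command below is only a common example.

```shell
# How much space do my files use under a given directory?
du -sh "$HOME"

# Many sites also provide a quota report; the exact command varies by
# cluster and filesystem (quota, mmlsquota, etc.), so check your
# cluster's documentation:
quota -s 2>/dev/null || true
```
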


Software

Check out our software library using our module system. You are also free to install software for yourself. For Python environments, we especially like Anaconda Python.
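A quick sketch of the module workflow follows; the module names and versions shown are placeholders, so run module avail first to see what your cluster actually offers.

```shell
# List available software modules:
module avail

# Load a module (name and version are placeholders):
module load Python/3.8.6

# See what is currently loaded, and unload everything:
module list
module purge

# Or build your own Anaconda environment (module name is a placeholder):
module load Anaconda3
conda create -n myenv python=3.9 numpy
conda activate myenv
```
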

Use the Clusters Interactively

Sometimes you need to debug or develop a new pipeline while being able to watch and modify it in real time. See below for how to get graphical apps forwarded from compute nodes to your local computer.
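A common pattern is X11 forwarding plus an interactive session, sketched below assuming a Slurm cluster; the hostname is a placeholder, and flag support (for example --x11 on srun) varies by site and Slurm version.

```shell
# Connect with X11 forwarding enabled (hostname is a placeholder):
ssh -Y netid@cluster.example.edu

# Request an interactive shell on a compute node; many Slurm sites
# support forwarding X11 from the compute node with --x11:
srun --pty --x11 -t 1:00:00 --mem=4G bash

# Graphical programs launched in this shell now display on your
# local computer.
```
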

Tips for Running on the Clusters

A few tips and tricks you might find useful.


Guides for running specific applications and workflows on our clusters that you might have questions about.

Policies and References

Need Additional Help?

If you have additional questions/comments, please contact the HPC team. If possible, please include the following information:

  • Your netid
  • Cluster
  • Queue/partition name
  • Job ID(s)
  • Error messages
  • Command used to submit the job(s)
  • Path(s) to scripts called by the submission command
  • Path(s) to output files from your jobs