Access the Clusters
We use SSH key pairs to log in to the clusters. If you already have a public key and know what you're doing, upload it via the link below and within a few minutes you should be able to log in.
For more detailed instructions, click the links below.
- Connect from macOS and Linux
- Connect from Windows
- Upload your SSH key here (only accessible on campus or through the Yale VPN)
- Troubleshoot Login
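As a sketch of the key-based workflow (hostname and netid below are placeholders, not real cluster addresses): generate an ed25519 key pair, upload the public half, then connect with ssh.

```shell
# Generate an ed25519 key pair in a scratch directory for demonstration;
# in practice you would accept the default path ~/.ssh/id_ed25519
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -f "$keydir/id_ed25519" -N "" -q

# The .pub file is the public key you upload via the link above
cat "$keydir/id_ed25519.pub"

# Once the key is registered, log in (placeholder hostname and netid):
# ssh -i "$keydir/id_ed25519" netid@clusterlogin.example.yale.edu
```

Keep the private key (the file without `.pub`) on your own machine only; the cluster never needs it.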
For additional information about using graphical interfaces or connecting to the cluster from off campus, see below.
Submit Jobs to the Scheduler
We use a scheduler called "Slurm" to submit jobs to the compute nodes.
The Slurm developers provide a PDF that maps equivalent commands across several schedulers.
If your jobs are taking a long time to start, see our Job Scheduling documentation to review the factors that affect the order in which jobs start.
If you are submitting a large number of similar jobs, please look at the Dead Simple Queue tool for bundling your jobs.
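To make the submission workflow concrete, here is a minimal batch script sketch; the job name and resource requests below are illustrative, not recommendations for any particular cluster.

```shell
# Write a minimal Slurm batch script; resource requests are illustrative
cat > hello_job.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=hello
#SBATCH --time=00:05:00
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=1G
echo "Hello from $(hostname)"
EOF

# On the cluster you would submit and monitor it with:
# sbatch hello_job.sh
# squeue --me
```

The `#SBATCH` lines are comments to the shell but directives to Slurm, so the same file runs both under the scheduler and, for quick testing, directly with `bash hello_job.sh`.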
Transfer Files and Data
Push your bits and bytes to and from the clusters.
Each cluster has a small amount of space dedicated to home directories, meant for notes and scripts. There are project and scratch60 spaces for larger and more numerous files.
- Cluster Data Storage
- Off-Cluster Research Data Storage
- Google Drive, Team Drive and Globus Google Connector
- Set Up Directories for Collaboration
To purchase additional project space above your original allocation, contact us at email@example.com.
Use Software
We use modules to manage many of the software packages installed on the Yale clusters. Modules allow you to add or remove different combinations and versions of software in your environment as needed. See our module guide for more information. Below are links to recently generated lists of the available software on each cluster. You can also run `module avail` to page through all available software once you log in.
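A typical module session looks like the following sketch; the package name and version are placeholders, so run `module avail` on the cluster to see what is actually installed.

```shell
# List everything available (pages through the list on the cluster)
module avail

# Load a specific version of a package (name/version are placeholders)
module load Python/3.11.5

# Show what is currently loaded, then unload everything when finished
module list
module purge
```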
We provide guides for certain software packages and languages as well.
Tips for Running on the Clusters
A few tips and tricks you might find useful.
- Preserve Sessions
- Optimize Job I/O
- Get Info About Compute Nodes
- Measuring Memory and CPU Usage
- Using Archive (Tape) Storage
Need Additional Help?
If you have additional questions or comments, please contact the HPC team. If possible, include the following information:
- Your netid
- Queue/partition name
- Job ID(s)
- Error messages
- Command used to submit the job(s)
- Path(s) to scripts called by the submission command
- Path(s) to output files from your jobs