Access the Clusters
Use SSH with an SSH key pair to log in to the clusters.
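For example, where farnam.hpc.yale.edu stands in for your cluster's login hostname and netid is your NetID (both illustrative):

```bash
# Log in using your SSH key pair; substitute your own NetID and cluster hostname
ssh netid@farnam.hpc.yale.edu
```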
For more details, check the links below.
- Connect from macOS and Linux
- Connect from Windows
- Sample SSH Configuration
- Upload your SSH key here (only accessible on campus or through the Yale VPN)
- Off Campus Access to the Clusters
- Troubleshoot Login
Submit Jobs to the Scheduler
Jobs are submitted to a scheduler, which dispatches them to the compute nodes.
Here is a PDF from the Slurm folks that maps equivalent commands across several schedulers.
(Note: over the course of 2017, we will be transitioning all clusters to use Slurm)
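As a quick sketch of the Slurm workflow we are moving to, a minimal batch script might look like the following (job name, partition, and resource requests are illustrative; check your cluster's documentation for real values):

```bash
#!/bin/bash
#SBATCH --job-name=example       # name shown in the queue
#SBATCH --partition=general      # illustrative partition name
#SBATCH --ntasks=1               # a single task
#SBATCH --cpus-per-task=1        # one CPU core
#SBATCH --mem=4G                 # memory request
#SBATCH --time=01:00:00          # walltime limit (HH:MM:SS)

# Your actual computation goes here
echo "Hello from $(hostname)"
```

Save it as example.sh, submit it with `sbatch example.sh`, and watch it with `squeue -u $USER`.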
Transfer Data
Push your bits and bytes to and from the clusters. Each cluster has a smaller home space for notes and scripts, and larger scratch and/or project spaces for high-performance computation. See below for more details on your cluster.
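For example, scp and rsync both work over SSH (hostname and paths are illustrative):

```bash
# Copy one file to your home directory on the cluster
scp results.tar.gz netid@farnam.hpc.yale.edu:~/

# Sync a whole directory; -avP preserves attributes and shows progress
rsync -avP my_project/ netid@farnam.hpc.yale.edu:~/my_project/
```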
Use the Clusters Interactively
Sometimes you need to debug or develop a new pipeline while watching and modifying it in real time. See below for how to forward graphical applications from compute nodes to your local computer.
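A minimal sketch, assuming your cluster runs Slurm and allows X11 forwarding (option names vary by site and Slurm version; hostname and resource requests are illustrative):

```bash
# Log in with X11 forwarding so graphical apps can display locally
ssh -Y netid@farnam.hpc.yale.edu

# Request an interactive shell on a compute node
srun --pty -c 1 --mem=4G -t 1:00:00 bash

# On Slurm builds with X11 support, --x11 forwards displays from the compute node
srun --x11 --pty bash
```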
Tips for Running on the Clusters
A few tips and tricks you might find useful.
- Preserve Sessions with tmux
- Optimize Job I/O
- Get Info About Compute Nodes
- Troubleshoot a Running Job
- Monitor Memory Usage (see the example after this list)
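For example, Slurm's own tools cover several of these tips (job ID and node name are illustrative):

```bash
# Peak memory of a running job step (here, step 0 of job 12345)
sstat -j 12345.0 --format=JobID,MaxRSS

# Peak memory, runtime, and state of a finished job
sacct -j 12345 --format=JobID,MaxRSS,Elapsed,State

# Hardware and state details for one compute node
scontrol show node c01n01
```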
Guides for particular software and workflows on our clusters that you might have questions about.
- Dead Simple Queue (Slurm only; see the job-array sketch after this list)
- Gaussian guide
- GPUs and CUDA guide
- Using GPUs with Python Deep Learning on Farnam
- MATLAB guide
- R guide
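For context on Dead Simple Queue: dSQ generates a Slurm job array from a file of tasks. Here is a hand-rolled sketch of the same job-array idea (file name and array size are illustrative; see the dSQ guide for the actual tool):

```bash
#!/bin/bash
#SBATCH --job-name=many-tasks
#SBATCH --array=1-10             # ten independent array elements (count illustrative)
#SBATCH --cpus-per-task=1
#SBATCH --time=00:30:00

# Each array element runs one line of tasks.txt (one shell command per line)
TASK=$(sed -n "${SLURM_ARRAY_TASK_ID}p" tasks.txt)
eval "$TASK"
```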
Policies and References
Need Additional Help?
If you have additional questions/comments, please contact the HPC team. If possible, please include the following information:
- Your NetID
- Queue/partition name
- Job ID(s)
- Error messages
- Command used to submit the job(s)
- Path(s) to scripts called by the submission command
- Path(s) to output files from your jobs