Milgram is a HIPAA-aligned Department of Psychology cluster intended for use on projects that may involve sensitive data. This applies to both storage and computation. If you have any questions about this policy, please contact us.
Milgram is named for Dr. Stanley Milgram, a psychologist who researched the behavioral motivations behind social awareness in individuals and obedience to authority figures. He conducted several famous experiments during his professorship at Yale University including the lost-letter experiment, the small-world experiment, and the Milgram experiment on obedience to authority figures.
If you are a first-time user, please read the pertinent links from our user guide about using ssh. You will also need to make sure you have a public ssh key uploaded to the cluster.
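If you need to create a key pair first, the following is a minimal sketch on macOS or Linux; the key type shown is an assumption, and the exact upload procedure is described in the user guide.

```
# Generate a key pair if you do not already have one
# (key type here is an assumption; follow the user guide if it specifies otherwise)
ssh-keygen -t ed25519

# The public key to upload is the contents of the .pub file
cat ~/.ssh/id_ed25519.pub
```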
You will then need to log in to the Yale HIPAA VPN. Use the Cisco AnyConnect VPN client to connect to access.yale.edu/hipaa. Note that you will need to use Duo Multi-factor Authentication (MFA) to log in to the VPN, just as you would for the standard VPN.
You should now be able to ssh to one of your group's login nodes, all of which are listed below.
| Login Node | Lab | Location |
|---|---|---|
| cl1.milgram.hpc.yale.internal | Chun Lab | SSS Hall |
| cl2.milgram.hpc.yale.internal | Chun Lab | SSS Hall |
| cl3.milgram.hpc.yale.internal | Chun Lab | SSS Hall |
| hl1.milgram.hpc.yale.internal | Holmes Lab | SSS Hall |
| hl2.milgram.hpc.yale.internal | Holmes Lab | SSS Hall |
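As a sketch, assuming your cluster username is your Yale NetID and your group uses the Chun Lab nodes:

```
# Replace "netid" with your own NetID and pick your group's login node from the table above
ssh netid@cl1.milgram.hpc.yale.internal
```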
Queues and Scheduler
Milgram uses LSF for scheduling jobs. Please see our LSF documentation for more info.
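As a rough illustration of an LSF batch script (the job name, core count, walltime, and output file below are placeholders; consult the LSF documentation for the queues and limits that apply on Milgram):

```
#!/bin/bash
#BSUB -J example_job         # job name (placeholder)
#BSUB -n 4                   # number of cores requested
#BSUB -W 01:00               # walltime limit, hh:mm
#BSUB -o example_job.%J.out  # standard output file; %J expands to the job ID

# your commands go here
echo "Running on $(hostname)"
```

Submit the script with `bsub < example_job.sh` and check its status with `bjobs`.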
Milgram uses the modules system for managing software and its dependencies. See our documentation on modules here.
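For example, a typical modules workflow looks like the following (the module name is a placeholder; use `module avail` to see what is actually installed):

```
module avail          # list available software
module load Python    # load a module (placeholder name)
module list           # show currently loaded modules
module unload Python  # unload it when finished
```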
There are 12 compute nodes on Milgram. They are all the same:
| Model | CPU | Clock Speed | Cores per Node | Memory |
|---|---|---|---|---|
| Dell R730 | Intel Xeon E5-2660 v3 | 2.6GHz | 20 | 128GB |
Milgram has 2TB of fast local disk available on each compute node for running jobs, and a larger pool of network-attached storage for archiving projects after computational work has completed. Because every transfer to or from the network-attached storage must traverse the network between the storage node and a client node, I/O to that storage is only as fast as the networking between them. This means that jobs will run much more quickly if they use the local storage wherever possible, as in the sketch below.
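The following is a sketch of staging data onto a node's local disk under LSF; the project and local scratch paths are hypothetical, so check the user guide (or contact us) for the actual locations.

```
#!/bin/bash
#BSUB -J scratch_example
#BSUB -n 1

# Hypothetical locations; adjust to your project and the node's local disk mount point
PROJECT_DIR=/path/to/network/project
LOCAL_DIR=/tmp/$USER/$LSB_JOBID   # $LSB_JOBID is set by LSF for each job

# Stage input data onto the node's fast local disk
mkdir -p "$LOCAL_DIR"
cp -r "$PROJECT_DIR/input" "$LOCAL_DIR/"

# Run the work against the local copy
cd "$LOCAL_DIR"
./analyze input/ > results.out   # placeholder for your actual command

# Copy results back to network storage and clean up
cp results.out "$PROJECT_DIR/"
rm -rf "$LOCAL_DIR"
```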