System Updates Archive

  • Grace: Ongoing Networking Issues

    Monday, April 27, 2020 - 5:00pm

    We are currently experiencing networking issues that are resulting in connection failures between nodes and the filesystem. These issues may have resulted in unexpected job failures over the last couple of weeks, so we encourage you to check on your recent jobs. To mitigate the immediate issues, we have powered off a large number of newly added nodes in the day and mpi partitions. We are working to resolve the immediate issues and to restore the additional nodes as soon as possible. We apologize for any inconvenience.
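
    One way to review your recent jobs is with Slurm's sacct command. The line below is a minimal sketch; the start date and the list of output fields are only illustrative, so adjust them to the period and details you care about.

    # Show your jobs since mid-April with their final states and exit codes.
    # Jobs affected by node or filesystem problems typically show FAILED or NODE_FAIL.
    sacct -u $USER -S 2020-04-13 --format=JobID,JobName,Partition,State,ExitCode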

  • All Clusters: /SAY/archive is currently unavailable

    Friday, April 10, 2020 - 9:00am to Monday, April 13, 2020 - 2:45pm

    /SAY/archive (S@Y archive tier) is currently unavailable on all clusters. We are aware of the issue and are working to resolve it as quickly as possible. Sorry for the inconvenience.

  • Scheduled maintenance on Farnam

    Monday, April 6, 2020 - 12:00am to Wednesday, April 8, 2020 - 11:59pm

    Farnam Users,

    The Farnam cluster will be unavailable due to scheduled maintenance until the end of day, Wednesday, April 8, 2020. A communication will be sent when the cluster is available. Please visit the status page on research.computing.yale.edu for the latest updates. If you have questions, comments, or concerns, please contact us at hpc@yale.edu.

  • COVID-19/SARS-CoV-2 Related Research

    If you are doing computational research related to COVID-19/SARS-CoV-2 on an HPC cluster and experience any long wait times for jobs to run, please let us know.

  • Grace Maintenance Update

    Thursday, February 6, 2020 - 5:00pm

    We have had to extend the Grace maintenance period due to issues with the upgraded network configuration. YCRC staff have been working continuously to investigate and resolve these issues, and we are planning to make the cluster available again by midday tomorrow, Friday, February 7. The Loomis storage will remain unavailable on Farnam until that time.

  • Grace Maintenance Extended

    Wednesday, February 5, 2020 - 5:00pm

    Due to issues with the upgraded network configuration and ongoing discussions with vendors, we are continuing to work on the Grace cluster and need to extend the maintenance period. A further email notification will be sent when the maintenance has been completed and the cluster is available. The Loomis storage will remain unavailable on Farnam.

  • Scheduled Maintenance on Grace

    Dear Grace and Farnam Users,

    As a reminder, scheduled maintenance will be performed on Grace beginning Monday, February 3, 2020, at 8:00 am. Maintenance is expected to be completed by the end of day, Wednesday, February 5, 2020.

    During this time, logins will be disabled and connections via Globus will be unavailable. The Loomis storage will not be available on the Farnam cluster. We ask that you log off the system prior to the start of the maintenance, after saving your work and closing any interactive applications. An email notification will be sent when the maintenance has been completed and the clusters are available.

    As the maintenance window approaches, the Slurm scheduler will not start any job if the job’s requested wallclock time extends past the start of the maintenance period (8:00 am on February 3, 2020). If you run squeue, such jobs will show as pending jobs with the reason “ReqNodeNotAvail”. (If your job can actually be completed in less time than you requested, you may be able to avoid this by making sure that you request the appropriate time limit using “-t” or “--time”.) Held jobs will automatically return to active status after the maintenance period, at which time they will run in normal priority order.
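
    For example, a job submitted shortly before the window can still start if its requested wallclock time ends before 8:00 am on February 3. Below is a minimal batch-script sketch; the job name, partition, and workload line are placeholders, and the time limit is just an example of requesting only what the job actually needs.

    #!/bin/bash
    #SBATCH --job-name=short_job     # placeholder name
    #SBATCH --partition=day          # example partition; use whichever partition you normally submit to
    #SBATCH --time=02:00:00          # request only the wallclock time the job actually needs
    ./run_analysis.sh                # placeholder for your actual workload

    You can check whether a submitted job is running or held with squeue -u $USER; as noted above, jobs blocked by the maintenance window appear as pending with the reason “ReqNodeNotAvail”.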

    During this maintenance the directory hierarchy will be “flattened”. This is part of a larger campaign to standardize the clusters and make it easier for everyone to know where data are. The current paths are of the form

    /gpfs/loomis/[project or scratch60 or home.grace]/[metagroup]/[group]/[netid]

    and, after the maintenance, will be

    /gpfs/loomis/home.grace/[netid]
    /gpfs/loomis/[project or scratch60]/[group]/[netid]

    Symlinks (shortcuts) will be left in place to make the old paths work, but it is recommended that you change any paths in your scripts to the new form as soon as possible after the maintenance. During the next maintenance window in August, the symlinks will be removed.
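
    If you want to get a head start on updating hard-coded paths after the maintenance, the sketch below shows one way to find and rewrite them with grep and sed. The metagroup ("support"), group ("ycrc"), netid ("nd123"), and script directory (~/scripts) are hypothetical placeholders; substitute your own values, and keep the .bak backups that sed creates until you have confirmed the changes.

    # Hypothetical old- and new-style paths; replace the placeholder components with your own.
    OLD=/gpfs/loomis/project/support/ycrc/nd123
    NEW=/gpfs/loomis/project/ycrc/nd123

    # List scripts that still reference the old-style path.
    grep -rl "$OLD" ~/scripts

    # Rewrite them in place, keeping a .bak copy of each modified file.
    grep -rl "$OLD" ~/scripts | xargs sed -i.bak "s|$OLD|$NEW|g"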

    Please visit the status page at research.computing.yale.edu/system-status for the latest updates. If you have questions, comments, or concerns, please contact us at hpc@yale.edu.

  • Scheduled Maintenance on Milgram

    Milgram Users, 

    The Milgram cluster will be unavailable due to scheduled maintenance until the end of day, Thursday, December 12, 2019. A communication will be sent to users when the cluster is available.

    Please visit the status page on research.computing.yale.edu for the latest updates. If you have questions, comments, or concerns, please contact us at hpc@yale.edu.

  • Data Center Maintenance - All Clusters

    Dear Farnam, Grace, and Ruddle Users,

    All HPC clusters (including storage) will be unavailable starting at 4:00pm, Friday, December 6, 2019. We expect that Farnam, Grace, and Ruddle will be returned to service by the end of the day on Tuesday, December 10, 2019.

    During this time, logins to the clusters will be disabled and connections via Globus will be unavailable. We ask that you log off the system prior to the start of the maintenance, after saving your work and closing any interactive applications. An email notification will be sent when the maintenance has been completed and each cluster is available.

    If you also have an account on Milgram, you will receive a second communication as the schedule for that cluster is different. Please visit the status page on research.computing.yale.edu for the latest updates. If you have questions, comments, or concerns, please contact us at hpc@yale.edu.

  • All Clusters Unavailable

    Monday, November 25, 2019 - 9:00am to 11:30am

    All clusters are unavailable due to an ongoing networking issue on the Science Network. We are working with ITS to resolve the problem as quickly as possible. We apologize for the inconvenience.
