System Updates Archive

  • 7/22/2022: RESOLVED: Power outage at West Campus impacted Grace and Milgram clusters.

    A large portion of Grace and all nodes on Milgram went offline on the morning of Friday 7/22/2022 due to a power issue in the data center. The systems have been restored; however, many jobs died due to the issue. Please check on the status of any jobs that may have been running at that time.
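
    If you are unsure which of your jobs were affected, standard Slurm accounting commands can list jobs that were active during the outage window. This is only a minimal sketch; the time window shown is illustrative and should be adjusted to the incident.

        # list your jobs that ran (or were killed) during the outage window
        sacct -X -S 2022-07-22T00:00 -E 2022-07-22T12:00 -o JobID,JobName,State,ExitCode
        # confirm what is still queued or running now
        squeue -u $USER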

  • Grace and Milgram Clusters Unavailable

    Friday, July 22, 2022 - 9:00am

    A power issue at the West Campus data center has impacted the Grace and Milgram clusters.  Facilities, ITS, and YCRC are working to restore service.

  • COMPLETED: Planned Maintenance: Cluster access interruption

    Thursday, July 14, 2022 - 7:30am

    Yale ITS will perform maintenance that will interrupt the connection between Yale’s Science Network (where the HPC clusters are hosted) and the Campus Network on Thursday, July 14, 2022, at 7:30am.  The interruption should last for about 30 seconds.  During this brief interruption, new attempts to login to the clusters will fail, and existing connections may be dropped.  Existing interactive jobs may be impacted. 

    Before the maintenance, please save your work, quit interactive applications, exit any interactive jobs, and log off the clusters. Batch jobs should not be impacted.

  • Milgram Scheduled Maintenance

    Dear Milgram Users,

    We will perform scheduled maintenance on Milgram starting on Tuesday, June 7, 2022, at 8:00 am. Maintenance is expected to be completed by the end of day, Thursday, June 9, 2022.

    During this time, logins to the cluster will be disabled. We ask that you log off the system prior to the start of the maintenance, after saving your work and closing any interactive applications. An email notification will be sent when the maintenance has been completed and the cluster is available.

    As the maintenance window approaches, the Slurm scheduler will not start any job if the job’s requested wallclock time extends past the start of the maintenance period (8:00 am on June 7, 2022). You can run the command “htnm” (short for “hours_to_next_maintenance”) to get the number of hours until the next maintenance period, which can aid in submitting jobs that will run before maintenance begins. If you run squeue, such jobs will show as pending jobs with the reason “ReqNodeNotAvail.” (If your job can actually be completed in less time than you requested, you may be able to avoid this by requesting an appropriate time limit using “-t” or “--time”.) Held jobs will automatically return to active status after the maintenance period, at which time they will run in normal priority order. All running jobs will be terminated at the start of the maintenance period. Please plan accordingly. A short example of choosing a suitable time limit appears at the end of this notice.

    Please visit the status page at research.computing.yale.edu/system-status for the latest updates. If you have questions, comments, or concerns, please contact us at hpc@yale.edu.

    Sincerely,

    Paul Gluhosky
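
    As a sketch only (the script name and the 12-hour limit below are placeholders, not site-specific guidance), a job can be submitted with a wallclock limit that fits before the maintenance window:

        # hours remaining until the next maintenance window (site-provided wrapper)
        htnm
        # request a time limit short enough to finish beforehand, e.g. 12 hours
        sbatch --time=12:00:00 my_job.sh
        # verify the job is not pending with reason ReqNodeNotAvail
        squeue -u $USER -o "%.10i %.20j %.8T %r"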
     

  • Grace Login Issue 5/18/2022 Update 2:05pm - Resolved

    5/18/2022 - 2:05pm - YCRC staff and our storage vendor have identified and resolved the underlying issue that caused the login problems. Users are encouraged to check the status of their jobs.

    5/18/2022 - 1:30pm - YCRC staff are currently investigating an issue affecting Grace login and compute nodes. Updates will be posted as more information becomes available.

  • 5/14/2022 5:25am - Power disruption affected all clusters. Please check the status of your jobs.

    05/14/2022: Due to a power outage at West Campus at about 5:25am, most compute nodes rebooted. Please check the status of your jobs.

  • Scheduled Maintenance on Ruddle

    Dear Ruddle Users,

    As a reminder, scheduled maintenance will be performed on Ruddle beginning Tuesday, May 3, 2022, at 8:00 am. Maintenance is expected to be completed by the end of day, Thursday, May 5, 2022. 

    During this time, logins will be disabled, running jobs will be terminated, and connections via Globus will be unavailable. The GPFS storage, including all YCGA sequencing data, will not be available. We ask that you log off the system prior to the start of the maintenance, after saving your work and closing any interactive applications. An email notification will be sent when the maintenance has been completed and the cluster is available.

    Please visit the status page at research.computing.yale.edu/system-status for the latest updates. If you have questions, comments, or concerns, please contact us at hpc@yale.edu.

    Sincerely,

    Paul Gluhosky

  • Scheduled Maintenance on Farnam

    Dear Farnam Users,

    As a reminder, we will perform scheduled maintenance on Farnam starting on Tuesday, April 5, 2022, at 8:00 am. Maintenance is expected to be completed by the end of day, Thursday, April 7, 2022.

    During this time, logins will be disabled and connections via Globus will be unavailable. Farnam storage (/gpfs/ysm and /gpfs/slayman) will remain available on the Grace cluster. We ask that you log off the system prior to the start of the maintenance, after saving your work and closing any interactive applications. An email notification will be sent when the maintenance has been completed and the cluster is available.

    As the maintenance window approaches, the Slurm scheduler will not start any job if the job’s requested wallclock time extends past the start of the maintenance period (8:00 am on April 5, 2022). You can run the command “htnm” (short for “hours_to_next_maintenance”) to get the number of hours until the next maintenance period, which can aid in submitting jobs that will run before maintenance begins. If you run squeue, such jobs will show as pending jobs with the reason “ReqNodeNotAvail.” (If your job can actually be completed in less time than you requested, you may be able to avoid this by requesting an appropriate time limit using “-t” or “--time”.) Held jobs will automatically return to active status after the maintenance period, at which time they will run in normal priority order. An example of checking for jobs held back this way appears at the end of this notice.

    Please visit the status page at research.computing.yale.edu/system-status for the latest updates. If you have questions, comments, or concerns, please contact us at hpc@yale.edu.

    Sincerely,

    Paul Gluhosky
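
    As a minimal sketch (the 6-hour limit and script name below are illustrative placeholders), pending jobs held back for the maintenance reservation can be identified and resubmitted with a shorter limit:

        # show your pending jobs and the scheduler's reason for holding them
        squeue -u $USER -t PENDING -o "%.10i %.20j %.8T %r"
        # if the reason is ReqNodeNotAvail, resubmit with a limit that fits
        # before the maintenance window
        sbatch --time=06:00:00 my_job.sh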

  • Grace Maintenance Extended

    Thursday, February 3, 2022 - 5:00pm

    Due to ongoing issues with the Loomis storage system and discussions with vendors to resolve them, we are continuing to work on the Grace cluster at this time. The maintenance period is therefore being extended. A further email notification will be sent when the maintenance has been completed and the cluster is fully available, or if there is a significant change in status. The Loomis storage will remain unavailable on Farnam.

    We recognize the impact that this has on your work and we are working hard to resolve the problems as quickly as possible. If you have questions, comments, or concerns, please contact us at hpc@yale.edu.

  • Grace Scheduled Maintenance

    Tuesday, February 1, 2022 - 8:00am to Thursday, February 3, 2022 - 5:00pm

    Scheduled maintenance will be performed on Grace beginning Tuesday, February 1, 2022, at 8:00 am. Maintenance is expected to be completed by the end of day, Thursday, February 3, 2022. 

    In addition to the normal maintenance activities, we are working to address the Loomis performance issue.

    During this time, logins will be disabled and connections via Globus will be unavailable. The Loomis storage will not be available on the Farnam cluster. We ask that you log off the system prior to the start of the maintenance, after saving your work and closing any interactive applications. An email notification will be sent when the maintenance has been completed and the cluster is available.
