System Updates Archive

  • Winter Recess

    Wednesday, December 23, 2020 - 9:00am to Monday, January 4, 2021 - 9:00am

    During the winter recess (Dec 23 - Jan 3), YCRC staff will monitor the HPC clusters and user tickets. We will do our best to address critical situations, but most issues will be addressed once we return on Jan 4th.

  • Ruddle Scheduled Maintenance

    Dear Ruddle Users,

    Scheduled maintenance will begin Monday, December 7, 2020, at 7:00 am. We expect that the cluster will return to service by the end of the day on Wednesday, December 9, 2020. During this time, logins to the cluster will be disabled. We ask that you log off the system prior to the start of the maintenance, after saving your work and closing any interactive applications. An email notification will be sent when the maintenance has been completed and the cluster is available.

    As the maintenance window approaches, the Slurm scheduler will not start any job if the job’s requested wallclock time extends past the start of the maintenance period (7:00 am on Monday, December 7, 2020). If you run squeue, such jobs will show as pending jobs with the reason “ReqNodeNotAvail.” If your job can actually be completed in less time than you requested, you may be able to avoid this by making sure that you request the appropriate time limit using “-t” or “--time.” Held jobs will automatically return to active status after the maintenance period, at which time they will run in normal priority order. All running jobs will be terminated at the start of the maintenance period. Please plan accordingly.
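    The workflow above can be sketched with standard Slurm commands; the time limit and job script name below are placeholders, not values from this notice:

    ```shell
    # Pick a wallclock limit that ends before the maintenance window opens;
    # e.g., a 48-hour limit submitted Friday morning finishes before 7:00 am
    # Monday. "my_job.sh" is a placeholder batch script.
    limit="48:00:00"
    cmd="sbatch --time=$limit my_job.sh"   # equivalently: sbatch -t "$limit" my_job.sh
    echo "$cmd"

    # Jobs whose requested limit crosses into the window stay pending; their
    # reason column shows ReqNodeNotAvail:
    #   squeue -u "$USER" -t PD
    ```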

    Please visit the status page at research.computing.yale.edu/system-status for the latest updates. If you have questions, comments, or concerns, please contact us at hpc@yale.edu.

    Sincerely,

    Paul Gluhosky

  • Milgram Scheduled Maintenance

    Dear Milgram Users,

    Scheduled maintenance will begin Monday, November 2, 2020, at 6:30 am. We expect that the cluster will return to service by the end of the day on Wednesday, November 4, 2020. During this time, logins to the cluster will be disabled. We ask that you log off the system prior to the start of the maintenance, after saving your work and closing any interactive applications. An email notification will be sent when the maintenance has been completed and the cluster is available.

    As the maintenance window approaches, the Slurm scheduler will not start any job if the job’s requested wallclock time extends past the start of the maintenance period (6:30 am on Monday, November 2, 2020). If you run squeue, such jobs will show as pending jobs with the reason “ReqNodeNotAvail.” If your job can actually be completed in less time than you requested, you may be able to avoid this by making sure that you request the appropriate time limit using “-t” or “--time.” Held jobs will automatically return to active status after the maintenance period, at which time they will run in normal priority order. All running jobs will be terminated at the start of the maintenance period. Please plan accordingly.

    Please visit the status page at research.computing.yale.edu/system-status for the latest updates. If you have questions, comments, or concerns, please contact us at hpc@yale.edu.

    Sincerely,

    Paul Gluhosky

  • Farnam Scheduled Maintenance

    Dear Farnam Users,

    As a reminder, we will perform scheduled maintenance on Farnam starting on Monday, October 5, 2020, at 8:00 am. Maintenance is expected to be completed by the end of day, Wednesday, October 7, 2020.

    During this time, logins will be disabled and connections via Globus will be unavailable. Farnam storage (/gpfs/ysm and /gpfs/slayman) will remain available on the Grace cluster. We ask that you log off the system prior to the start of the maintenance, after saving your work and closing any interactive applications. An email notification will be sent when the maintenance has been completed and the clusters are available.

    As the maintenance window approaches, the Slurm scheduler will not start any job if the job’s requested wallclock time extends past the start of the maintenance period (8:00 am on October 5, 2020). You can run the command “htnm” (short for “hours_to_next_maintenance”) to get the number of hours until the next maintenance period, which can aid in submitting jobs that will run before maintenance begins. If you run squeue, such jobs will show as pending jobs with the reason “ReqNodeNotAvail.” (If your job can actually be completed in less time than you requested, you may be able to avoid this by making sure that you request the appropriate time limit using “-t” or “--time”.) Held jobs will automatically return to active status after the maintenance period, at which time they will run in normal priority order.
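    The arithmetic behind an hours-to-maintenance helper like “htnm” can be sketched in plain shell; this assumes GNU date (-d), and the function name is ours, not the actual YCRC command:

    ```shell
    # hours_until TARGET [REFERENCE]: whole hours from REFERENCE (default: now)
    # until TARGET. Sketch of the calculation an "htnm"-style helper performs.
    hours_until() {
      local target ref
      target=$(date -d "$1" +%s)        # GNU date: parse timestamp to epoch seconds
      ref=$(date -d "${2:-now}" +%s)
      echo $(( (target - ref) / 3600 ))
    }

    # Three days before the 8:00 am Oct 5 window:
    hours_until "2020-10-05 08:00" "2020-10-02 08:00"   # prints 72
    ```

    A job submitted with a --time limit no larger than this value can still start before the window opens.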

  • Aug 4th Power Disruption

    Tues Aug 4 - Due to extreme weather conditions, there was a power interruption affecting the clusters between approximately 2:00 PM and 2:30 PM. Some compute nodes were affected, which would have caused running jobs to fail. Please check the status of any jobs that were running during that time, and contact YCRC staff with any additional questions.

  • Scheduled Maintenance on Grace

    Dear Grace and Farnam Users,

    Scheduled maintenance will be performed on Grace beginning Monday, August 3, 2020, at 8:00 am. Maintenance is expected to be completed by the end of day, Wednesday, August 5, 2020.

    During this time, logins will be disabled and connections via Globus will be unavailable. The Loomis storage will not be available on the Farnam cluster. We ask that you log off the system prior to the start of the maintenance, after saving your work and closing any interactive applications. An email notification will be sent when the maintenance has been completed and the clusters are available.

    Please visit the status page at research.computing.yale.edu/system-status for the latest updates. If you have questions, comments, or concerns, please contact us at hpc@yale.edu.

  • Ruddle scheduled maintenance

    Tuesday, June 9, 2020 - 8:00am

    The Ruddle cluster will be unavailable due to scheduled maintenance until the end of day, Thursday, June 11, 2020. A communication will be sent when the cluster is available. If you have questions, comments, or concerns, please contact us at hpc@yale.edu.

  • RESOLVED Grace / Farnam Storage Performance

    RESOLVED - 17:00

    5/20/20 - 16:15 - YCRC staff are currently investigating an issue that is affecting storage performance on Grace and Farnam. 

  • Grace: Ongoing Networking Issues

    Monday, April 27, 2020 - 5:00pm

    We are currently experiencing networking issues that are resulting in connection failures between nodes and the filesystem. These issues may have caused unexpected job failures over the last couple of weeks, so we encourage you to check on your recent jobs. To mitigate the immediate issues, we have powered off a large number of newly added nodes in the day and mpi partitions. We are working to resolve the immediate issues and to restore the additional nodes as soon as possible. We apologize for any inconvenience.

  • All Clusters: /SAY/archive is currently unavailable

    Friday, April 10, 2020 - 9:00am to Monday, April 13, 2020 - 2:45pm

    /SAY/archive (S@Y archive tier) is currently unavailable on all clusters. We are aware of the issue and working to resolve it as quickly as possible. We apologize for the inconvenience.
