System Updates Archive

  • Scheduled Maintenance on Ruddle

    Ruddle Scheduled Maintenance

    Dear Ruddle Users,

    As a reminder, we will perform scheduled maintenance on Ruddle starting on Tuesday, November 1, 2022, at 8:00 am. Maintenance is expected to be completed by the end of day, Thursday, November 3, 2022.

    During this time, logins will be disabled, running jobs will be terminated, pending jobs will be held, and connections via Globus will be unavailable. The GPFS storage, including all YCGA sequencing data, will not be available. We ask that you save your work, close interactive applications, and log off the system prior to the start of the maintenance. An email notification will be sent when the maintenance has been completed and the cluster is available. Held jobs will automatically return to active status after the maintenance period, at which time they will run in normal priority order.
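
    If you would like to confirm that you have no running or pending jobs before logging off, a quick check with Slurm's squeue is one option (a minimal sketch; run it from a Ruddle login node):

      # List all of your current jobs; an empty list means nothing of yours will be affected
      squeue -u $USER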

    Please visit the status page at research.computing.yale.edu/system-status for the latest updates. If you have questions, comments, or concerns, please contact us at hpc@yale.edu.

    Sincerely,

    Paul Gluhosky

  • Globus Unavailable 10am - 11am 10/19/2022

    Oct. 19, 2022: There will be brief outages to Globus services for upgrades between 10 AM and 11 AM. Any data transfers already in progress will be suspended and will resume from where they left off once the upgrades are complete. No new transfer requests will be possible during the outage.

  • Farnam / Gibbs Scheduled Maintenance

    Farnam Scheduled Maintenance 10/4 - 10/6

    As a reminder, we will perform scheduled maintenance on Farnam starting on Tuesday, October 4, 2022, at 8:00 am. Maintenance is expected to be completed by the end of the day, Thursday, October 6, 2022. 

    During this time, logins will be disabled and connections via Globus will be unavailable. Farnam storage (/gpfs/ysm and /gpfs/slayman) will remain available on the Grace cluster.

    As previously announced, we will be retiring the Farnam HPC cluster early next year.  As such, this will be Farnam’s last scheduled maintenance. Farnam will be replaced with a new HPC cluster, McCleary, Yale’s first direct-to-chip liquid cooled cluster. For more information about the Farnam decommission and McCleary launch, see our website: https://docs.ycrc.yale.edu/clusters/farnam-decommission/.

    10/4/2022 - 6:45pm Gibbs Maintenance Complete

    The scheduled maintenance on the Gibbs storage system has been completed. Gibbs is now available on Grace and Ruddle. Jobs have been restarted and you may resume normal job submission.

    Please visit the status page at research.computing.yale.edu/system-status for the latest updates. If you have questions, comments, or concerns, please contact us at hpc@yale.edu.

  • 9/27/2022: One-day maintenance will affect some Grace nodes and all Milgram compute nodes

    In order to perform maintenance on the electrical supply that provides power to part of the HPC Data Center at West Campus, in preparation for adding additional hardware, some compute nodes will be unavailable starting on Tuesday, September 27, 2022, at 8:00 am. Maintenance is expected to be completed by the end of the day, and the nodes will then be re-enabled.

    The impacted nodes are all compute nodes on Milgram and those with a node name starting with “p08” on Grace. This affects the following commons and PI partitions, though in some cases only part of the partition is affected:

      Milgram
        All compute nodes

      Grace
        bigmem: 3 nodes (5 nodes unaffected)
        day: 66 nodes (233 nodes unaffected)
        gpu: 4 nodes with V100 GPUs and 5 nodes with RTX 2080 Ti GPUs (22 nodes with A100, K80, P100, and RTX 5000 GPUs unaffected)
        gpu_devel: 1 node
        mpi: 88 nodes (44 nodes unaffected)
        transfer: 2 nodes
        week: 17 nodes (8 nodes unaffected)
        pi_balou: 9 nodes (44 nodes unaffected)
        pi_berry: 1 node
        pi_econ_io: 6 nodes
        pi_econ_lp: 5 nodes (8 nodes unaffected)
        pi_esi: 36 nodes
        pi_gelernter: 1 node (1 node unaffected)
        pi_hodgson: 1 node
        pi_howard: 1 node
        pi_jorgensen: 3 nodes
        pi_levine: 20 nodes
        pi_lora: 4 nodes
        pi_manohar: 4 nodes (11 nodes unaffected)
        pi_ohern: 2 nodes (20 nodes unaffected)
        pi_polimanti: 2 nodes

    The system will automatically start using the nodes again once they are available, and an email notification will be sent when the maintenance has been completed.

    As the maintenance window approaches, the Slurm scheduler will not start any job on the impacted nodes if the job’s requested wall clock time extends past the start of the maintenance period (8:00 am on September 27, 2022). If you run squeue, such jobs will show as pending with the reason “ReqNodeNotAvail.” (If your job actually needs less time than you requested, you may be able to avoid this by requesting an accurate time limit using “-t” or “--time”.)
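
    For example, a job that only needs a couple of hours can be submitted with an explicit wall clock limit so the scheduler can fit it in before the maintenance begins (a minimal sketch; the script name and time value below are placeholders):

      # Request a 2-hour time limit so the job can finish before 8:00 am on September 27
      sbatch --time=02:00:00 batch_script.sh

      # Check whether any of your jobs are pending with the reason ReqNodeNotAvail
      squeue -u $USER --states=PD --format="%.10i %.12j %.10T %.20R"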

  • Tuesday, September 6: Yale network issues impacting Ruddle logins.

    Tuesday, September 6, 2022 - 9:00am
    Yale network issues are currently preventing logins via ssh to Ruddle.  ITS does not have an ETA for resolution at this time.  We apologize for the inconvenience. 
     
  • Resolved - Milgram Network Interruption

    09:00 9/1/2022 - RESOLVED: ITS has fixed the issue with the VPN.   If you are still having issues connecting to Milgram, please disconnect from the VPN, then reconnect to the VPN, and then try logging in to Milgram again.

    15:10 8/31/2022 - YCRC staff are currently working on identifying and resolving an issue that is affecting access to the Milgram cluster. 

  • Issues with Gibbs storage system

    Friday, August 12, 2022 - 12:00am

    Two incidents caused the Gibbs storage system to be unreachable from 10:43pm Wednesday to 12:15am Thursday, and again from 10:30pm Thursday to midnight Friday morning.  Unfortunately, this caused many compute jobs to fail on Grace, Farnam, and Ruddle.  Please check the status of your jobs.  We are working with the vendor to identify and address the root causes.  We apologize for the inconvenience.
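
    To see which of your jobs may have been affected, Slurm's sacct can list jobs by time window and state (a minimal sketch; the timestamps below are examples covering the two windows described above and should be adjusted as needed):

      # List your jobs that were in a failed state during the affected period
      sacct -u $USER -S 2022-08-10T22:00:00 -E 2022-08-12T01:00:00 \
            --state=FAILED,NODE_FAIL --format=JobID,JobName,State,End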

  • Performance issues on the Palmer filesystem

    Tuesday, August 9, 2022 - 12:00am

    We are aware of performance issues on the Palmer filesystem. We are working with the vendor to resolve the issues as quickly as possible. We apologize for the inconvenience.

  • Grace Scheduled Maintenance

    Tuesday, August 2, 2022 - 8:00am to Thursday, August 4, 2022 - 5:00pm

    Scheduled maintenance will be performed on Grace beginning Tuesday, August 2, 2022, at 8:00 am. Maintenance is expected to be completed by the end of day, Thursday, August 4, 2022. 

    During this time, logins will be disabled and connections via Globus will be unavailable. The Loomis storage will not be available on the Farnam cluster. We ask that you save your work, close any interactive applications, and log off the system prior to the start of the maintenance. An email notification will be sent when the maintenance has been completed and the cluster is available.

  • 7/22/2022: RESOLVED: Power outage at West Campus impacted Grace and Milgram clusters.

    A large portion of Grace and all nodes on Milgram went offline on the morning of Friday, 7/22/2022, due to a power issue in the data center. The system has been restored; however, many jobs died due to the issue. Please check the status of any jobs that may have been running at that time.
