...

Start | End | What System/Service is affected | What is happening? | What will be affected? | Contact Person
2018-07-19 08:00 | 2018-07-19 12:00 | LSST

Monthly maintenance (July):

  1. Dell firmware updates/reboots
  2. OS package updates/reboots (including upgrades to CentOS 7.5)
  3. GPFS client changes and upgrade to 4.2.3.9
  4. GPFS server upgrade to 4.2.3.9

ALL lsst-dev systems (incl. lsst-dev01, lsst-xfer, etc. as well as PDAC, verification, and Kubernetes clusters)

The following systems will remain online and unaffected:

  • lsst-daq
  • lsst-l1-*
  • tus-ats01

lsst-sysadm@ncsa.illinois.edu

 


Previous Outages or Maintenance

Start | End | What System/Service was affected? | What happened? | What was affected? | Outcome | Contact Person

2018-07-09 11:30 | 2018-07-10 17:00 | Campus Cluster Monitoring Webpage | SET is moving set-analytics to https. This should have been a simple host name change, but after the change the new value was not picked up. | The monitoring web page showed a loading circle that never resolved to anything. | Set up a Grafana instance for the display of the Campus Cluster monitoring. | help@campuscluster.illinois.edu
2018-06-28 | 2018-07-09 | Nebula | Nebula was taken offline to repair the filesystem. | All Nebula services | Nebula is performing well now. | nebula@ncsa.illinois.edu
2018-06-29 13:00 | 2018-07-08 14:00 | Blue Waters Nearline Endpoint | Due to very high demand for data retrieval from Nearline, a pause rule was in effect to allow manual task scheduling. Tasks could be submitted as normal and were run as quickly as possible. | Tasks submitted to Globus started in a paused state but were released to run, at the earliest possible time, based on resource availability. | The backlog of file stages was cleared and the endpoint pause rule was removed. | hpssadmin@ncsa.illinois.edu
2018-07-02 18:00 | 2018-07-06 06:00 | Access to NPCF | For the July 4th UIUC fireworks show, parking lots E14 and E14-shuttle were closed from 6:00 p.m. Monday, July 2nd, through 6:00 a.m. Friday, July 6th. No parking was allowed in these locations at any time during this period. Please do not park in the NPCF dock area - use the shuttle buses, or park in lot E46 (south on Oak St.). | Parking facilities for NPCF | Parking is back to normal.
2018-05-03 14:30 | 2018-06-28 09:00 | iForge gpu queue | Both nodes in the general 'gpu' queue were offline due to issues with the GPUs. | The iForge 'gpu' queue could not be used. | Tried driver updates and engaged with vendors; ultimately got one node working with 4 M40 GPUs rather than the previous 2 K80 GPUs; continuing to engage with vendors to get the other node working, but the queue is now available.
2018-07-02 08:00 | 2018-07-02 12:00 | Blue Waters Nearline | One tape library (of four) was powered down for hardware maintenance (replacement of a tape import/export module). | Access to tapes in the affected library was blocked until the system returned to service. Users staging data may have seen delays in accessing data until the library was back online. | Work was completed with some delay (it was scheduled to complete by 09:30) due to a failed SD card (used for storing and loading library geometry). | hpssadmin@ncsa.illinois.edu
2018-06-27 9:00 | 2018-06-27 1:00 | LSST - k8s lspdev | kub001 had an unplanned reboot and kub004 ran out of memory. | lspdev JupyterHub | Nodes/services were rebooted and Kubernetes pods were restarted. | lsst-admin@ncsa.illinois.edu
2018-06-27 08:30 | 2018-06-27 11:49 | Slack | Slack reported connectivity issues on their status page (https://status.slack.com/). | Slack | Slack reports, "workspaces should be able to connect again". | feedback@slack.com

2018-06-23 19:44 | 2018-06-23 19:59 | Blue Waters Scratch Filesystem | The top-of-rack network switch died in rack 8. Cray was onsite and performed a workaround, with replacement scheduled for Monday. The Sonexion in rack 28 lost its mind and was rebooted. | Partial scratch outage of ost169-179 | Bypassed the faulty switch; the rack 28 Sonexion was rebooted; the faulty switch was replaced Monday the 25th. | bw-admin@illinois.edu (tbouvet)
2018-06-21 12:00 | 2018-06-23 10:45 | Blue Waters Nearline Endpoint | Due to very high demand for data retrieval from Nearline, a pause rule was in effect to allow manual task scheduling. Tasks could be submitted as normal and were run as quickly as possible. | Tasks submitted to Globus started in a paused state but were released to run, at the earliest possible time, based on resource availability. | Many tasks were pushed through the system by manually ordering them to reduce tape drive competition. The endpoint pause rule was removed and all tasks resumed. | hpssadmin@ncsa.illinois.edu
2018-06-21 08:00 | 2018-06-21 09:35 | LSST

Monthly maintenance (June):

  • pfSense firewall update
  • OS package updates/reboots for CentOS 6.9 servers (lsst-web, lsst-xfer, lsst-nagios)
  • Slurm update (lsst-dev01, lsst-verify-worker*)
  • Update host firewalls on GPFS servers
  • iDRAC configuration updates on lsst-dev01 and ESXi hosts

Affected:

  • CentOS 6.9 servers: lsst-web, lsst-xfer, lsst-nagios
  • Slurm/verification cluster

Other impact was not expected, but unexpected issues could have led to connectivity issues for other hosts or downtime for lsst-dev01 or hosted VMs.

Maintenance was completed.

lsst-sysadm@ncsa.illinois.edu
2018-06-20 14:00 | 2018-06-20 19:00 | Campus Cluster | Rolling reboot of the core IO servers to move GPFS from 4.2.3.8 to 4.2.3.9 for CentOS 7.5 support; no downtime occurred. | Successful upgrade | The cluster now supports CentOS 7.5 clients. | set@ncsa.illinois.edu
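A rolling GPFS upgrade like the one above is normally done one IO server at a time so the filesystem stays mounted on clients throughout. The sketch below is only an illustration of that pattern, assuming standard Spectrum Scale admin commands and a hypothetical server name ccio01; it is not the exact procedure that was run here.

    # Illustrative rolling upgrade of a single GPFS IO server (hypothetical node "ccio01").
    # Repeat per server; the remaining IO servers keep serving data in the meantime.
    mmgetstate -a                              # confirm the cluster is healthy before starting
    mmshutdown -N ccio01                       # stop GPFS on just the server being upgraded
    yum update -y gpfs.base gpfs.gpl gpfs.ext  # install the 4.2.3.9 packages (package names assumed)
    mmbuildgpl                                 # rebuild the portability layer for the running kernel
    mmstartup -N ccio01                        # rejoin the server to the cluster
    mmgetstate -N ccio01                       # verify it reports "active" before moving on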
2018-06-18 | 2018-06-20 7pm | Nebula | Nebula was shut down to fix broken filesystems. | All Nebula services | Nebula is up and running again. Please contact nebula@ncsa.illinois.edu if you still see issues. | nebula@ncsa.illinois.edu
2018-06-19 08:00 | 2018-06-19 12:00 | LSST L1 Test Stand

Scheduled maintenance:

  • BIOS firmware updates
  • Puppet and firewall changes (including support for SAL unicast/multicast traffic)
  • OS package updates (staying with CentOS 7.4)

Affected: Level One Test Stand, including lsst-daq and lsst-l1-*.

Maintenance completed successfully.

lsst-sysadm@ncsa.illinois.edu
2018-06-18 07:00 | 2018-06-18 09:30 | vSphere & various VMs | Two of our hosts went down with network interface errors. | Multiple VMs hosted on those nodes (incl. fileserver, ncsa-print, and subversion) | Both hosts are back online, as well as all VMs. | help+its@ncsa.illinois.edu
2017-06-16 22:18:32 | 2017-06-17 08:10:00 | cforge | The PBSPro server was hung on cfsched. | Job scheduling and job submission were failing. | Restarted the PBSPro server on cfsched. | Jim Long
2018-06-15 13:30 | 2018-06-15 15:30 | Blue Waters Nearline | Replacement of a tape robot transporter. | This work was not expected to impact operations. The library system continued to operate with a single transporter, but mount times may have been somewhat longer until the second unit was returned to service. | hpssadmin@ncsa.illinois.edu
2018-06-12 04:30 | 2018-06-12 10:00 | Blue Waters | Thunderstorms resulted in a power interruption. The outage impacted both the compute nodes and all filesystems, so a full reboot was necessary. Return to service was estimated at approximately 10 am Central time. | Blue Waters in total | Full reboot
2018-06-12 ~03:45 | 2018-06-12 ~06:00 | Campus Cluster | Many compute nodes rebooted. No system on UPS was affected, and some compute nodes remained up. Facilities at ACB report that there were no power events that morning or the night before, but a power event seems the most likely cause. | Many compute nodes, but not all. Jobs on the nodes that rebooted were lost. | Nodes rebooted at a similar time, and many returned in a state unsuitable to run jobs. Rebooting in smaller groups got everything working again. | help@campuscluster.illinois.edu
2018-06-12 ~03:40 | 2018-06-12 ~06:30 | iForge | A storm caused a brief power event which impacted the big_mem and skylake queues. | All nodes in the big_mem and skylake queues were rebooted by the power event. | Nodes rebooted on their own and were marked back online in the scheduler by around 6:30am.
2018-06-12 ~03:40 | 2018-06-12 09:00 | LSST

A storm caused a power event which impacted:

  • Kubernetes Commons / lsst-lspdev
  • 75% of verification cluster compute / Slurm

The following nodes rebooted because of the power event:

  • all kub* nodes (causing an outage of Kubernetes Commons / lsst-lspdev)
  • 75% of verify-worker* nodes (partial outage of Slurm / verification cluster compute nodes)

verify-worker nodes were put back online in Slurm around 06:10. Kubernetes Commons resumed service by around 09:00.

lsst-sysadm@ncsa.illinois.edu
2018-06-11 08:30 | 2018-06-11 08:35 | Campus Cluster ADS | VLAN changes on the Campus Cluster. | Campus Cluster Active Data Storage (ADS) | Maintenance completed successfully. | help+neteng@ncsa.illinois.edu
2018-06-07 06:30 | 2018-06-07 14:00 | Blue Waters | The boot node crashed, requiring the system to be rebooted. The file system and ESLogins remained up. | All running jobs were lost, and no new jobs were started until the system was returned to service. | Torque was updated to ver. 6.1.2. | bw-admin@ncsa.illinois.edu
2018-06-01 00:50 | 2018-06-01 03:50 | Blue Waters | /var space was filled up by additional logging in Moab to troubleshoot the job slide issue. | The PBS server went down due to no space in /var. | Zipped and moved old Moab logs to the Lustre file system to free up /var space, then restarted the PBS server. | bw-admin@ncsa.illinois.edu
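The recovery described above (compressing old Moab logs, moving them off /var, and restarting the PBS server) would look roughly like the following. The log path, Lustre destination, and service name are illustrative assumptions, not the exact commands that were run.

    # Free space in /var by archiving old Moab logs to Lustre (paths are hypothetical).
    cd /var/spool/moab/log
    gzip moab.log.*                                  # compress the rotated logs in place
    mv moab.log.*.gz /mnt/lustre/admin/moab-logs/    # move the archives off /var
    df -h /var                                       # confirm space was actually recovered
    service pbs_server restart                       # restart the PBS server now that /var has room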
2018-05-31 14:00 | 2018-05-31 14:10 | NCSA Open Source | Retirement of both HipChat and FishEye/Crucible. | Services were shut down and archived. | Services are disabled and will be archived in a month.
2018-05-31 08:00 | 2018-05-31 11:55 | NCSA ITS vSphere vCenter | The ITS vSphere vCenter server was upgraded to the latest VMware vCenter 6.7. | All VMs remained online during the maintenance, but management through vCenter was offline during the upgrade. | Successful upgrade to VMware vCenter 6.7. | help+its@ncsa.illinois.edu
2018-05-23 06:55 | 2018-05-24 19:00 | Campus Cluster File System | A failure of both disk array controllers serving the CC file systems resulted in abrupt loss of access to the underlying storage. One array controller was identified as broken, and the storage system was brought back up on the remaining controller for inspection and analysis. A thorough check of the file systems and storage devices was started. At 11:00 on May 24th the replacement array controller arrived and was installed. After further testing to assure system stability, the file systems were brought back online and released to the cluster admins. | All Campus Cluster file systems | Normal cluster operations were resumed. Investigation into the root cause is ongoing with the cooperation of the system manufacturer.
2018-05-21 | | DNS1/2 | There were a few reports of intermittent DNS lookup failures/slowness. Firewall state table resources were being exhausted. | | Limits for those state tables have been increased, which appears to have resolved the problem. There have been no further reports of the issue since making the adjustment. | help+neteng@ncsa.illinois.edu
2018-05-24 10:55am | 2018-06-24 11:08am | ifsm.ncsa.illinois.edu | The system was upgraded (yum upgrade) and rebooted. | No services should have been affected. | Upgrade and reboot completed.
2018-05-17 8:00 | 2018-05-17 15:00 | NPCF-Core-East | The hardware and firmware on the core east router were upgraded. | Traffic was rerouted through npcf-core-west during the maintenance window. There was an unexpected outage of about 10 minutes which impacted network connectivity throughout NCSA. | The upgrade on core-east was completed successfully. No further network outages are expected.
2018-05-09 7:00 | 2018-05-09 17:40 | dns1.ncsa.illinois.edu | Enabled BIND on IPv6 and enabled a firewall on the server. | No impact was expected. | Maintenance was completed.
2018-05-17 08:00 | 2018-05-17 13:30 | LSST

Monthly maintenance (May):

  • GPFS server & client updates, plus nosuid mounting
  • Physical firewall changes in NPCF for new vLANs
  • BIOS firmware updates
  • OS updates
  • Update of puppet-stdlibs module

All systems (except lsst-daq, lsst-l1-*, & tus-ats01) were unavailable for maintenance.

Maintenance was extended until 13:30 and then completed. External Grafana monitoring (monitor-ncsa.lsst.org) was offline until 14:25 due to a storage rebuild on lsst-monitor01.

2018-05-17 10:13 | 2018-05-17 10:18 | Core Outage | During core router maintenance the incorrect core router was powered off. | Network connectivity across NCSA was affected. | The core router was powered back on, verified, and brought back into service.
2018-05-16 08:00 | 2018-05-16 17:40 | Campus Cluster

Monthly maintenance (May):

  • GPFS upgrade to 4.2.3.8
  • FW upgrade on Juniper switches
  • OS updates
  • Add 4 more 40G cables for ccioe nodes for redundancy

The entire system was unavailable for maintenance. Maintenance complete, all tasks complete.

2018-05-16 11:00 | 2018-05-16 13:00 | ADS | Planned Campus Cluster network upgrades also impacted access to ADS. | All ADS storage exports became unreachable. | Eric has notified us that the networking maintenance is complete and ADS customers are able to access their storage again.
2018-03-21 | 2018-05-14 | openxdmod.ncsa.illinois.edu | An update to Torque broke the updates of XDMoD. openxdmod.ncsa.illinois.edu was offline while the system it resided on was updated, all the dependency software was installed, and the latest version of XDMoD was installed. Then all the data had to be re-imported. | Software updates | Service restored with updated software.
2018-05-08 00:00 | 2018-05-09 00:15 | NCSA Storage Condo | One node ran out of memory, causing a deadlock in GPFS. During deadlock recovery, GPFS shut down on multiple nodes. Upon restart of the cluster, a different metadata server had a check on its PCI bus, forcing another unmount. All file systems but one were recovered. While recovering the last one, one of the Roger NetApp storage arrays started throwing errors, requiring a power cycle of the controller and disks, prompting a final recovery of the last file system. | Condo file systems and services. | All file systems recovered and services restored.
2018-05-08 07:00 | 2018-05-08 07:40 | iForge | Quarterly maintenance (20180508 Maintenance for iForge). | All systems were unavailable during the maintenance. | Planned maintenance completed successfully.
2018-05-08 8:00 | 2018-05-09 8:00 | NPCF-Core-West | The hardware and firmware on the core router were upgraded. | Traffic was rerouted through npcf-core-east during the maintenance window. No impact was expected. | The hardware and firmware were upgraded on npcf-core-west without incident. Traffic has been successfully failed back.
2018-05-03 08:45 | 2018-05-03 10:15 | NCSA WIKI, JIRA, services that rely on NCSA LDAP | A large number of connections from two particular servers were hitting LDAP, causing a slow-down that in turn caused timeouts for various applications using LDAP authentication. Blocking the culprit servers remedied the situation. | NCSA WIKI, NCSA JIRA, and other applications that rely on NCSA LDAP authentication. | The culprit servers were blocked.
9:00am | 9:25am | syslog-sec.ncsa.illinois.edu | Out-of-cycle patching of the Security syslog collectors to address CVE-2018-1000140. | Load balancing failed over to the secondary collector; RELP was buffered. | relay-01 was updated and the load balancer failed back.
2018-04-25 14:00 | 2018-04-25 15:00 | MREN WAN Circuit | WAN circuit testing. | Traffic was re-routed over an alternate peering during the test period. | The MREN circuit was brought back into production.
2018-04-24 12:30 | 2018-04-24 16:00 | NCSA jabber service | Jabber was down while we repaired its authorization configuration. | jabber.ncsa.illinois.edu wasn't accepting jabber logins. | Jabber is working again.
2018-04-24 09:10 | 2018-04-24 09:50 | LSST | Increased the LDAP timeout to 60 seconds in sssd.conf to fix problems with long login times and failures to start batch jobs. | kub*, verify-worker* | sssd.conf was updated and sssd restarted; verify-worker nodes were drained during the change; affected nodes may have had slow LDAP response times for a short while (due to the local cache needing to be rebuilt).
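For reference, raising an LDAP timeout in sssd.conf and draining Slurm nodes around such a change could look like the sketch below. The exact sssd option that was changed is not stated above, so ldap_search_timeout is an assumption, as is the verify-worker node range.

    # /etc/sssd/sssd.conf (illustrative -- the specific option adjusted is not documented above)
    #   [domain/default]
    #   ldap_search_timeout = 60

    # Drain the worker nodes before pushing the change, then resume them afterward.
    scontrol update nodename=verify-worker[01-48] state=drain reason="sssd.conf update"
    # ...apply the sssd.conf change and restart sssd on the affected nodes...
    systemctl restart sssd
    scontrol update nodename=verify-worker[01-48] state=resume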
2018-04-18 10:30 | 2018-04-18 11:30 | ICCP April Maintenance | Replaced 4x10G links from cc-core0 to carne. Updated BIOS on the remaining cluster nodes. | No outage. | Completed without any outage.
2018-04-18 10:30 | 2018-04-18 11:30 | ICCP core switches | One of the 4x10G links from cc-core0 to carne had incrementing errors and had been administratively downed to prevent those errors from affecting traffic. Earlier diagnosis had revealed a scratched fiber, so we replaced the fiber during this ICCP PM. | Nothing; all traffic rerouted through cc-core1. | The errors are still incrementing, but we've narrowed down the remaining options for what might be going on.
2018-04-12 09:30 | 2018-04-12 18:30 | ADS NFS/Samba | The ESXi hypervisor server had an error on it: 'A PCI error requiring a reboot has occurred.' | ADS NFS/Samba/GridFTP | The server was rebooted, the error cleared, and all systems/services were restarted.
2018-04-11 3:00 pm | 2018-04-11 3:15 pm | Netact | Netact code was updated. Going forward, new office activation names will have "-ofc" appended to them. | No service impact to Netact. | The change was successfully implemented. Netact remained in service during and after the change.
2018-04-11 9:00 | 2018-04-11 10:00 | LSST NPCF Firewall | The primary firewall was upgraded to use FRR instead of openBGP. | No impact was expected. The firewalls did not need to be failed over and no interruption in traffic flow was anticipated. | The firewall was successfully migrated. No downtime occurred.

2018-04-10 17:00 | 2018-04-10 18:00 | dns1.ncsa.illinois.edu | OS patching and BIND updates. | dns1 (secondary DNS server) was rebooted to apply patches. dns2 remained up. | dns1 OS patching was completed. BIND was upgraded to 9.11. BIND is currently bound only to its IPv4 interface.

2018-04-10 15:00 | 2018-04-10 16:00 | dns2.ncsa.illinois.edu | OS patching and BIND updates. | dns2 (secondary DNS server) was rebooted to apply patches. dns1 remained up. An IPv6 address was also added to the system in preparation for a broader IPv6 DNS rollout. | The dns2 OS was patched. BIND was upgraded to 9.11. An IPv6 address was also enabled on the server and BIND is listening on that address.
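Enabling an IPv6 listener in BIND, as was done on dns2 (and later dns1), is normally a one-line change in named.conf plus a reload. The snippet below is a generic illustration, not the actual NCSA configuration.

    # /etc/named.conf (illustrative)
    options {
        listen-on port 53 { any; };        # existing IPv4 listener
        listen-on-v6 port 53 { any; };     # add an IPv6 listener so BIND answers on the new address
    };
    # Reload the configuration without a full restart:
    #   rndc reconfig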

2018-04-04 16:00 | 2018-04-04 17:00 | MREN WAN Circuit | Port move. | Traffic was re-routed over an alternate peering during the maintenance. | The port was moved and the circuit was brought back into service without issue.
2018-04-04 16:17 | 2018-04-04 16:42 | LDAP | The LDAP process crashed. | Authentication to LDAP-backed services. | LDAP was upgraded and restarted.

 

| 2018-03-29 17:00 | MREN WAN Circuit | WAN circuit testing. | Traffic was re-routed over an alternate peering during the test period. | Testing was completed and the circuit was brought back into service.
2018-03-21 08:00 | 2018-03-21 17:30 | Campus Cluster management server and compute nodes except DES and MWT2 | Deploying a new management server, upgrading to Torque 6.1.2 and Moab 9.1.2, BIOS updates, configuration changes on the GPFS servers, and the Tech Services CARNE code upgrade. | Scheduler down. User access disabled. | The new management server is up with CentOS 7. Installed Torque 6.1.2 and Moab 9.1.2. BIOS updates are done on most nodes. Configuration changes on GPFS are done. The Tech Services CARNE code upgrade is done.
2018-03-16 1:00pm | 2018-03-16 5:45pm | ISDA + NCSA OpenSource | Security patches of the VM servers as well as the backend filesystem; updates of Bamboo, JIRA, Confluence, BitBucket, and CROWD. | All systems were unavailable for a brief period of time. During updates of OpenSource services, parts of OpenSource were offline for up to an hour. | Updated the fileserver (brief struggle with ZFS and kernel updates) and the Proxmox servers. Updated JIRA, Confluence, BitBucket, and CROWD. Bamboo will be done later this weekend.
2018-03-12 9:00am | 2018-03-12 5:00pm | Nebula OpenStack cluster | Security and filesystem patches. | All instances and Nebula services were unavailable. | Filesystem updates and security patches were applied. The filesystem is more responsive, but ~20 instances are repairing from problems that occurred before the outage.
2018-03-15 12:20 | 2018-03-15 16:20 | LSST

Lingering issues on select nodes following the March PM:

  • lsst-qserv-master01 - cannot mount local /qserv volume
  • lsst-xfer - issue w/ sshd
  • lsst-dts - issue w/ sshd
  • lsst-l1-cl-dmcs - unknown issue
  • lsst7 - issue w/ sshd

The following were resolved by 13:23:

  • lsst-qserv-master01
  • lsst-xfer
  • lsst-dts
  • lsst-l1-cl-dmcs

Resolved by 16:20:

  • lsst7
2018-03-15 08:00 | 2018-03-15 12:20 | LSST

March maintenance:

  • GPFS server updates and configuration of additional NFS/Samba services
  • Urgent firmware updates
  • Increase size of /tmp on lsst-dev01
  • Hardware maintenance/memory increases on select servers/VMs
  • Release of refactored Puppet code for NCSA 3003 servers
  • OS updates
  • Recabling servers in NCSA 3003 to new switches

All systems were unavailable for maintenance.

Completed and most systems back online. Lingering issues for lsst-qserv-master01, lsst7, lsst-xfer, lsst-dts, and lsst-l1-cl-dmcs are being tracked in a separate status event.

2018-03-14 12:00 | 2018-03-14 12:35 | Remote Access VPN | An issue with authentication for the VPN occurred. | New connections could not be established; existing connections were unaffected. | Authentication services were restored.
2018-03-09 10:08am | 2018-03-09 11:00am | Campus Cluster | According to IBM, cc-mgmt1 was the culprit in halting communication across the cluster during the GPFS snapshot process. | Users couldn't log in or access the filesystem. | Rebooted cc-mgmt1 and restarted services (RM & scheduler).
2018-03-09 06:05 | 2018-03-09 08:00 | public-linux, www.ncsa.illinois.edu, & events.ncsa.illinois.edu | A routine kernel upgrade resulted in failure of the OpenAFS client on these servers. | OpenAFS storage was unavailable on these servers, resulting in the website failures. | Resolved. Packages were updated and OpenAFS was reinstalled.
2018-03-07 15:00 | 2018-03-07 16:10 | LSST | qserv-db12 had one failed drive in the OS mirror replaced, but the other drive was presenting errors as well, so the RAID could not rebuild. The Qserv system would have been unavailable during this maintenance. | qserv-db12 | The node was taken down to replace the 2nd disk, rebuild the RAID in the OS volume, and reinstall the OS.

2018-03-07 14:00 | 2018-03-07 14:40 | ESnet Peering | The connection servicing our direct peering with ESnet was moved during this window. | Connections were rerouted over a redundant peering. No service impact was expected. | The connection was successfully migrated and the peering with ESnet was brought back into service without issue.

2016-03-06 01:00 | 2016-03-06 10:40 | WAN Connectivity Degraded | The router servicing several of our WAN connections was in a degraded state. | Traffic was gracefully rerouted. No user-facing connectivity issues were reported. | A graceful failover to the backup routing engine cleared a fault condition and the affected peerings were re-established.
2018-02-27 07:15 | 2018-02-27 09:10 | Campus Cluster scheduler | The scheduler became unresponsive. | Job submission & starting new jobs. | Rebooted the node and restarted the RM & scheduler.
2018-02-26 06:00 | 2018-02-27 01:35 | All Blue Waters Services | Security patch of CLE, SU26 Lustre patch. | All Blue Waters resources were unavailable. | Blue Waters returned to service at 1:35 AM Feb 27th, with HPSS returning earlier at 10 PM Feb 26th.
2018-02-23 16:30 | 2018-02-23 16:30 | Kerberos Admin service | The KDC configuration was modified to allow creation of service principals that can create and modify host and service principals. | The kadmin service was unavailable for 1 second while the new config was read. | We can now delegate to groups or users the ability to create and manage host keys and service principals.
2018-02-23 08:00 | 2018-02-23 09:00 | LSST Puppet Changes | Rolled out a significant reorganization of the Puppet resources in the NCSA 3003 data center in order to standardize between LSST Puppet environments at NCSA. We had done extensive testing and did not expect any outages or disruption of services. | No interruption of services. Changes were applied to: lsst-dev01, lsst-dev-db, lsst-web, lsst-xfer, lsst-dts, lsst-demo, L1 test stand, DBB test stand, elastic test stand. | Updated successfully with no interruption of availability or services.
2018-02-21 13:30 | 2018-02-22 00:39 | ESnet 100G Peering Down | There was a suspected fiber cut between Urbana and Peoria on ICCN optical equipment. Our 100G direct WAN path to ESnet rides over this optical path and was therefore down. The fiber vendor identified the source of the problem (high water caused the fiber to be pulled out of a splice case). | Nothing. All traffic destined for ESnet, or for resources that would normally take the ESnet WAN path, rerouted through our other WAN paths. | Repaired.
2018-02-21 08:00 | 2018-02-21 20:00 | Campus Cluster

Campus Cluster February maintenance:

  1. Applying security patches & OFED upgrade
  2. Testing/tuning metadata performance
  3. Troubleshooting/upgrading code on the cc-core switches

All systems were unavailable.

Completed partially; the following items are rescheduled for the next maintenance:

  1. Deploying the new scheduler (due to system stability)
  2. Upgrading Torque 6.1.2 and Moab 9.1.2 (not enough time for testing after release)
  3. Maintenance on the CARNE router (bug in the code)
2018-02-05 10:45 | 2018-02-21 13:30 | ICCP Networking - Outbound | A hardware failure on one of the two core switches for ICCP caused that switch to enter a degraded service mode and eventually fail completely. This was combined with software bugs that caused looping of packets between the two cores in the MC-LAG. The other core was still functioning properly and provided connectivity for all ICCP/ADS/DES systems normally for the duration of the degraded service period. A hardware replacement RMA was initiated; the hardware came in, but the hardware alone did not fix the issue. We then waited for an ICCP PM where we could test things without interruption of service, upgraded the code, and put in some bug-mitigation configuration changes. These things combined solved the issues. | Nothing as far as production. During the period when cc-core0 was down, aggregate outbound bandwidth was 40Gbps instead of the normal 80Gbps. | As of now both cores are in production and stable.
2018-02-16 12:00 | 2018-02-16 12:30 | IPSEC VPN | The appliance servicing various IPSEC VPN connections was patched. | Nothing | The patch was successful, utilizing the failover capability of the VPN cluster to mitigate any service interruptions.
2018-02-15 08:00 | 2018-02-15 13:00 | LSST

February maintenance:

  • Updating GPFS mounts to access the new storage appliance
  • Rewire 2 PDUs at NCSA 3003
  • Switch stack configuration changes at NCSA 3003
  • Routine system updates
  • Firewall maintenance at NPCF
  • Updates to system monitoring

All systems were unavailable. Completed and all systems back online.
2018-02-13 08:00 | 2018-02-06 09:00 | Certificate System Firewall 2 | Upgrade software to the current production version. No interruptions to service were expected. | CA services | FW upgraded; services were interrupted due to a failed routing service.
2018-02-13 06:00 | 2018-02-13 06:30 | AnyConnect VPN | Patches were applied to the AnyConnect VPN appliance. | Access to the NCSA AnyConnect VPN was unavailable. | The VPN has been patched and client connections have been re-established.
2018-02-10 02:00 | 2018-02-10 10:35 | Campus Cluster | A GPFS snapshot hung and locked the filesystem. | All systems were inaccessible. Running jobs were lost. | Gathered information for IBM, bounced the filesystem, and rebooted the cluster.
2018-02-06 07:00 | 2018-02-06 17:35 | iForge | Quarterly maintenance (20180206 Maintenance for iForge). | All systems were unavailable during the maintenance. | Planned maintenance completed successfully.
2018-02-06 08:00 | 2018-02-06 09:00 | Certificate System Firewall 1 | Upgrade software to the current production version. It was expected that current connections would be interrupted and a retry would be required. | Affected services:

  • cilogon.org
  • idp.ncsa.illinois.edu
  • idp.xsede.org
  • NCSA TFCA Myproxy
  • XSEDE Myproxy

Completed.
2018-02-01 16:30 | 2018-02-01 16:45 | sslvpn.ncsa.illinois.edu | We rebooted our VPN appliances to mitigate a critical security vulnerability that allows remote code execution exploits. The vulnerability is described here: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20180129-asa1 | Certain industry partners' site-to-site VPNs | The VPN rebooted without incident. Service was restored at 4:34 PM.
2018-02-01 16:30 | 2018-02-01 16:45 | vpn.ncsa.illinois.edu | We rebooted our VPN appliances to mitigate a critical security vulnerability that allows remote code execution exploits. The vulnerability is described here: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20180129-asa1 | Certain industry partners' site-to-site VPNs and the NCSA remote access VPN service were down during the maintenance. Any users connected to the NCSA VPN at the time of the maintenance lost connectivity. | The VPN rebooted without incident. Service was restored at 4:34 PM.
2018-01-29 10:05 | 2018-01-29 10:10 | LSST verify worker nodes and lsst-dev | A network flap on the LSST network caused GPFS ejection of some nodes. Networking and security are investigating. | A few of the LSST nodes for 2-5 minutes, and 2 jobs. | The Qualys scan time frame was changed and the investigation continues.
2018-01-29 12:27 | 2018-01-29 12:31 | NCSA Jabber service | The Jabber service was restarted to install a new SSL certificate. | NCSA Jabber was down momentarily. | NCSA Jabber restarted with the new SSL certificate.
2018-01-26 13:00 | 2018-01-26 13:15 | LSST NFS service slowdown | A cron job for Lenovo system cleanup ran and caused the Lenovo box to slow down services. The NFS service was starved. | lsst-dev NFS showed stale mounts. | The cron job was deleted and re-written.
2018-01-24 13:35 | 2018-01-24 14:55 | LSST NFS service | We were notified by the NCSA security team that there was a stale NFS mount on one of the LSST test nodes. NFS services stopped working. | All NFS mounts for LSST systems such as lsst-demo and lsst-SUI were not working. | The NFS server was rebooted.
2018-01-23 23:00 | 2018-01-24 01:25 | Condo storage services | Hit a known bug in GPFS 4.2.0.4 for quota management. | All Condo services from 11pm to 1:25am. | We need to upgrade to a newer level of GPFS, but for now we have lowered the frequency of the check_fileset_inodes script.
2018-01-22 07:00 | 2018-01-22 13:05 | Blue Waters Compute Nodes | Blue Waters compute nodes were bounced to resolve issues caused by the previous home file system outage (due to a bad OST). | Compute nodes were down and the scheduler was paused. | Compute nodes were bounced successfully and returned to full service.
2018-01-21 08:42 | 2018-01-21 11:30 | Netsec-vc switch stack - FPC 4 | Switch member 4 of the netsec switch stack was down. Severe filesystem corruption occurred on the primary partition. | Any hosts connected to member 4 of that switch that were not redundantly connected to other switches in the stack. | The switch was repaired by doing a full reformat/reinstall of JunOS. Everything is back in production.
2018-01-20 22:00 | 2018-01-21 03:00 | Condo file systems | While bringing the Roger disk into the condo, commands executed from the Roger GPFS servers caused the cluster to arbitrate for GPFS servers. | All condo file systems mounted on nodes. | The SSH configuration was changed on the Roger GPFS servers to include the Condo GPFS server IPs. All file systems returned to normal with no other problems and no remounts required.
2018-01-18 17:00 | 2018-01-19 15:00 | ISDA Hypervisors, NCSA Open Source | Hypervisor updates. | All systems were down for short amounts of time as the hypervisors rebooted. | All patches applied.
2018-01-18 00:00 | 2018-01-18 24:00 | Campus Cluster | Copying all data to the new filesystem. Deploying new storage (14K). Dividing the cluster into two (IB & Ethernet). Upgrading GPFS to 4.2.3.6. Deploying a new management node and a new image server (if time permits). Applying security patches to compute nodes (no FW update at this time). | All systems unavailable. | The new storage system was brought online; additional capacity and performance were added.

2018-01-18 18:40 | 2018-01-18 23:00 | LSST | LSST firewall outage in NPCF. Both pfSense firewalls were accidentally powered off. | PDAC (Qserv & SUI) and the verification clusters were inaccessible, and GPFS issues were introduced across many services, e.g. lsst-dev01. | The pfSense firewall appliances were power cycled and services were restored.
2018-01-18 12:58 | 2018-01-18 14:10 | Code42 CrashPlan backup system | The Code42 CrashPlan servers were upgraded to the latest JDK and Code42 6.5.2. | Clients were unable to perform restores or push files into backup archives from roughly 13:35-13:55. | The Code42 servers are now running the latest security updates to the CrashPlan service.
2018-01-18 08:00 | 2018-01-18 10:00 | LSST | Monthly OS updates, network switch updates, firmware updates, etc. | All dev systems unavailable. Qserv and SUI nodes remained available. | Complete.
2018-01-17 10:35 | 2018-01-17 13:00 | RSA Authentication Manager Servers | Upgraded to Authentication Manager 8.1 SP1 P7. | No systems should have seen any impact. | The latest security patches are applied.
2018-01-12 06:00 | 2018-01-12 10:00 | Decommission NCSA Rocket.chat | The old NCSA Rocket.chat service was shut down. | Any archived conversations or content are no longer available to users. | The NCSA Rocket.chat service was shut down and redirected to NCSA @ Illinois Slack.
2018-01-12 00:00 CST | 2018-01-12 06:00 CST | Internet2 | Engineers from Internet2 migrated our BGP peering with I2's Commercial Peering Service (CPS) to a new location. Small disruptions could occur with the maintenance for the CPS service, but no user traffic disruptions were expected. | None; alternative routes are present. | Maintenance was completed successfully.
2018-01-11 08:00 | 2018-01-11 13:30 | LSST | Critical patches on lsst-dev systems (incl. kernel updates). | All systems unavailable. | Complete.
2018-01-11 00:00 CST | 2018-01-11 04:00 CST | Connectivity to Internet2 and backup LHCONE peerings (ICCP and MWT2, respectively) | Engineers from Internet2 performed maintenance that affected certain BGP peerings on the device that is ICCP/MWT2's upstream router, CARNE. Specifically, both the 100G Internet2 peering and the Internet2 LHCONE peering on CARNE were disrupted during this timeframe. MWT2 currently reaches LHCONE through CARNE's ESnet peering, which was fully functional, and was also able to reach UChicago through CARNE's OmniPoP 100G peering. As for ICCP, traffic to/from Internet2-based routes rerouted through the ICCN. Nothing was reported to be service impacting by this maintenance from either ICCP or MWT2. | Maintenance was completed successfully.
2018-01-08 10:47 | 2018-01-08 11:30 | Nebula | Storage nodes lost networking. | All Nebula instances. | The storage nodes were brought back online and instances were rebooted.
2018-01-02 09:00 | 2018-01-05 17:00 | Nebula | Nebula was shut down for hardware and software maintenance from January 2nd, 2018 at 9am until January 5th, 2018 at 5pm. Spectre and Meltdown patches were applied, as well as all firmware updates and OS/distribution updates, and the filesystem was upgraded. | All systems were unavailable. | A faster system that is now homogeneous, so OpenStack upgrades are now possible.
2018-01-04 17:00 | 2018-01-05 20:00 | Blue Waters | One OST hosting the home file system had three drives fail simultaneously. | Portions of the home file system (with data on the affected OST) were not accessible. | Repair work was carried out on the failed OST. The scheduler continued to operate but allowed only jobs not affected by the failed OST to start. Full operation resumed after successful recovery of the failed OST.
2017-12-20 08:00 | 2017-12-20 10:00 | LSST | (1) Firewall maintenance (08:00-09:00) and (2) migration of NFS services (08:00-10:00). | Firewall maintenance: there should be no noticeable effect, but the scope of service includes most systems at NPCF (including PDAC, SUI, and Slurm/batch/verify nodes). Migration of NFS services: SUI and lsst-demo* nodes. | Maintenance completed without issues.
2017-12-14 06:00 | 2017-12-14 20:30 | LSST | Monthly OS updates, network switch updates, firmware updates, etc. | All systems unavailable. | All systems back online. We ran into issues with the policy-based routing on the LSST aggregate switches in NPCF that caused the outage to be extended longer than planned.
2017-12-13 09:00 | 2017-12-13 11:00 | JIRA Upgrade | Upgraded JIRA from version 7.0 to 7.6. | NCSA JIRA | Successfully upgraded.
2017-12-13 06:30 | 2017-12-13 07:39 | NCSA Jabber | Attempted to upgrade the Openfire XMPP jabber software. | NCSA Jabber was unavailable during the upgrade. | The upgrade failed. Jabber is available, but still running the old version. The upgrade will be rescheduled.
2017-12-11 10:00 | 2017-12-11 16:00 | Unused AFS fileservers upgraded to 1.6.22 | After moving all volumes to the servers updated on 2017-12-07, the now-unused AFS servers were upgraded to OpenAFS 1.6.22. | No impact to other systems, as the servers were unused at the time they were upgraded. | All of NCSA's AFS cell is running on OpenAFS 1.6.22.
2017-12-09 03:00 | 2017-12-09 07:42 | Blue Waters Portal | The Blue Waters portal software crashed. Automated monitoring processes did not restart it correctly. | The Blue Waters portal website was unavailable. | The Blue Waters portal service was manually restarted and the website is available.
2017-12-09 10:00 | 2017-12-09 14:00 | Globus Online (Globus.org) | The Globus service was unavailable on Saturday, December 9, 2017, between 10:00am and 2:00pm CST for scheduled upgrades. Active file transfers were suspended during this time and resumed when the Globus service was restored. Users trying to access the service at globus.org (or on their institution's branded Globus website) saw a maintenance page until the service was restored. | All NCSA Globus endpoints.
2017-12-07 | 2017-12-07 | Unused AFS file servers upgraded to 1.6.22 | Three unused AFS fileservers were upgraded to the latest 1.6.22 release of OpenAFS. | No impact to other systems, as the servers were unused. | These AFS fileservers can no longer be crashed by malicious clients.
2017-12-07 | 2017-12-07 | AFS database servers upgraded to 1.6.22 | The three database servers were upgraded to the latest 1.6.22 release of OpenAFS. | No modern clients noticed the staggered updates. | These servers can no longer be crashed by malicious clients.
2017-12-05 16:00 | 2017-12-05 16:20 | dhcp.ncsa.illinois.edu | NCSA Neteng migrated the DHCP server VM to the Security team's VMware infrastructure. | Hosts on the NCSAnet wireless network and any activated hosts on the roaming range might have been impacted. Illinoisnet and Illinois_Guest wireless were available at all times, and wired network connections were available throughout the maintenance window. | Maintenance was completed successfully and services are running as expected.
2017-12-02 09:30 | 2017-12-02 11:45 | NCSA opensource | Upgrade of Bamboo, JIRA, Confluence, BitBucket, FishEye, and CROWD. | Sub-services of opensource could be down for a short time. | All services upgraded and running as normal.
2017-11-20 18:21 | 2017-11-29 14:30 | ROGER OpenStack cluster | I/O issues highlighted that the GPFS CES NFS servers probably shouldn't run 400+ days without a reboot. | ROGER's OpenStack and the various services hosted therein, including the JupyterHub server. | A reboot of all nodes, including the CES servers, as well as a reboot of all hypervisors (with the fallout being that one node required an fsck and a second reboot, and another node/hypervisor is still unavailable) cleared most of the problems. I/O contention was felt as many instances simultaneously attempted to start/restart. Instances that were housed on the unavailable node are being migrated to another hypervisor.
2017-11-21 9:00 | 2017-11-22 14:00 | Open Source / ISDA servers | Update the fileserver that hosts the VMs and all of the XEN servers. | NCSA Open Source unavailable; most ISDA servers unavailable. | Network issues delayed the updates. All hosts were updated and everything is back to normal.
2017-11-21 16:00 | 2017-11-21 16:40 | Code42 CrashPlan | The Code42 CrashPlan infrastructure was upgraded to version 6.5.1 to apply security and performance improvements. | Clients transparently reconnected to servers after they restarted. | Now running on Code42 version 6.5.1.
2017-11-20 9:00 | 2017-11-20 16:38 | Nebula OpenStack cluster | The Nebula OpenStack cluster was unavailable for emergency hardware maintenance. A failing RAID controller in one of the storage nodes and a network switch were replaced. | Not all instances were impacted. Running Nebula instances that were affected by the outage were shut down, then restarted again after the maintenance finished. | Nebula is available. No additional maintenance is needed for Tuesday, November 21.
2017-11-16 16:46 | 2017-11-20 12:40 | NCSA JIRA | JIRA wasn't importing some email requests properly after the NCSA MySQL restart. | Some email sent to JIRA via help+ addresses wasn't being imported. | JIRA is now accepting email and all email sent while it was broken has been imported as expected.
2017-11-16 08:30 | 2017-11-16 13:30 | BW LDAP Master (Blue Waters) | Scheduled maintenance. | Updated LDAP Lustre quotas to bytes and added archive quotas. IDDS will track and drive quota changes with acctd. | Production continued without interruption. The BW LDAP master was isolated, Lustre quotas were changed to bytes with the addition of archive quotas, and replicas pulled updates without error.
2017-11-16 14:30 | 2017-11-16 16:52 | Internal website (MIS Savanah) | A database table used by MIS tools became corrupted. | The website would become unresponsive every time the corrupted database table was accessed. | The OS kernel and packages were updated during debugging. The MIS database table was restored and the website came back online.
2017-11-16 16:46 | 2017-11-16 16:48 | NCSA MySQL | The NCSA MySQL server had to be restarted in order to delete the corrupted table used by MIS. | All services that use MySQL were down during the outage, including Confluence, JIRA, RT, and many websites. | MySQL was restarted successfully.
2017-11-16 08:00 | 2017-11-16 12:00 | LSST | Monthly OS updates, plus the first round of Puppet technical debt changes (upgrading to best design & coding practices). | All systems unavailable from 08:00-10:00. GPFS unavailable from 08:00-10:00. PDAC systems unavailable from 08:00-12:00. | Completed. OS kernel and package updates. Slurm upgraded to 17.02.
2017-11-15 13:30 | 2017-11-15 15:10 | RSA Authentication Manager | RSA Authentication Manager was patched to fix cross-site scripting vulnerabilities and other issues. | Nothing was affected by the update. | RSA Authentication Manager is running 8.2 SP1 P6. The process worked as expected.
2017-11-15 13:30 | 2017-11-15 14:30 | BW 10.5 Firewall Upgrade Part 2 | The normally active "A" unit of the NCSA BW 10.5 firewall was upgraded and then normal fail-over status was re-enabled. | There was the possibility of connection resets when the A unit came back from being upgraded and state was being synced. | Completed; the process worked as expected.
2017-11-14 11:27 | 2017-11-14 11:33 | LDAP | LDAP was unresponsive to requests. | Several services hung while authentication was unavailable. | LDAP services were killed and restarted.
2017-11-05 02:15 | 2017-11-06 17:11 | ROGER Hadoop/Ambari | cg-hm12 and cg-hm13 took minor disk failures which crashed the nodes. | Ambari was effectively offline. | Rebooted the node; it ran fsck as part of its startup sequence and booted properly.
2017-10-31 17:22 | 2017-11-03 17:00 | ROGER Hadoop/Ambari | Hard drive failures on cg-hm10 and cg-hm17. | Certain Ambari services and HDFS. | cg-hm17 returned to service after a power cycle and reboot; cg-hm10's hard drive didn't respond to a reboot.
2017-11-11 16:58 | 2017-11-11 19:09 | Blue Waters | A water leak from XDP4-8 caused high temperatures in c12-7 and c14-7, and an EPO on c12-7 and c14-7. | | The scheduler was paused to place system reservations on compute nodes in the affected cabinets, then resumed.
2017-11-10 14:00 | 2017-11-10 14:45 | NCSA Open Source | Upgrade of the following software: Bamboo, JIRA, Confluence, and BitBucket. | Updates happened in place and resulted in minimal downtime of components. | Completed with minimal interruption of service.
2017-11-10 08:00 | 2017-11-10 08:30 | CA Firewall Upgrade - B unit | The stand-by "B" unit of the NCSA Certificate Service firewall was upgraded to the same version as the A unit. | No impact to services was expected. | Completed with no interruption of service.

2017-11-08 16:30 | 2017-11-08 17:30 | Netdot | netdot.ncsa.illinois.edu was migrated to Security's VMware infrastructure. | During the downtime users weren't able to activate or deactivate their network connections via Netact. | Migrated successfully. Netdot is up and running.
2017-11-08 06:00 | 2017-11-08 15:00 | ITS vSphere vCenter | ITS vSphere was upgraded to the latest version of VMware vCenter. New access restrictions were also put into place. | All VMs remained online during the maintenance, but management through vCenter was offline during the upgrade. | Upgrade completed successfully.
2017-11-08 09:30 | 2017-11-08 10:00 | BW 10.5 Firewall Upgrade Part 1 | The stand-by "B" unit of the NCSA BW 10.5 firewall was upgraded and then traffic was redirected through it for load testing before the "A" unit was upgraded. | No impact to services was expected. | Upgrade completed successfully. Some states were reset when traffic switched to the B unit.
2017-11-07 7:00 | 2017-11-07 18:37 | iForge

Quarterly maintenance:

  • Update OS image.
  • Update GPFS to version 4.2.3-5.
  • Redistribute power drops.
  • Update TORQUE.
  • BIOS updates.

Affected: iForge (and associated clusters). All production systems are back in service.

2017-11-07 13:30 | 2017-11-07 15:00 | CA Firewall Upgrade Part 2 | The normally active "A" unit of the NCSA Certificate Service firewall was upgraded and then normal fail-over status was re-enabled. | There was the possibility of connection resets when the A unit came back from being upgraded and state was being synced. | Completed the upgrade.
2017-11-06 15:28 | 2017-11-06 15:53 | Blue Waters | An EPO happened on c12-7 and c14-7. | The HSN quiesced. | The scheduler was paused to place system reservations on compute nodes in the affected cabinets, then resumed.
2017-11-03 16:21 | 2017-11-03 16:32 | LDAP | LDAP was unresponsive to requests. | Several services hung while authentication was unavailable. | LDAP services were killed and restarted.
2017-11-02 09:00 | 2017-11-02 16:00 | LSST | LSST had a GPFS server down and had failed over to the other server for NFS. | The GPFS clients failed over automatically, and we manually failed over the NFS in the morning. | NFS exports were moved to an independent server. IBM was at NCSA and is continuing to debug the problems.
2017-10-31 17:11 | 2017-11-01 11:13 | LSST | GPFS degraded/outage. | Most NCSA-hosted LSST resources experienced degraded GPFS performance; hosts with native mounts (PDAC) experienced an outage. | A deadlock at 17:11 yesterday temporarily caused slow performance. Then one GPFS server went offline at 18:21 and services failed over. NFS mounts (qserv/sui) were reported as hanging by a user at 09:12 today but may have been degraded overnight. Affected nodes were rebooted and NFS mounts recovered by 11:13. IBM is onsite diagnosing issues with the GPFS system and ordering repairs (including a network card for one server).
2017-10-31 15:30 | 2017-10-31 16:00 | LSST | GPFS outage. | Most NCSA-hosted LSST resources: native mounts (e.g., lsst-dev01, verify-worker*) and NFS mounts (e.g., PDAC). | All disks in the GPFS storage system went offline temporarily and came back online by themselves. NFS services were restarted. Client nodes all recovered their mounts on their own. Logs have been sent to the vendor for analysis.
2017-10-31 13:30 | 2017-10-31 14:30 | CA Firewall Upgrade Part 1 | The stand-by "B" unit of the NCSA Certificate Service firewall was upgraded and then traffic was redirected through it for load testing before the "A" unit was upgraded. | No impact to services was expected. | Upgrade completed successfully. Some states were reset when traffic switched to the B unit.
2017-10-30 18:36 | 2017-10-31 00:46 | LSST | GPFS outage. | Most NCSA-hosted LSST resources: native mounts (e.g., lsst-dev01, verify-worker*) and NFS mounts (e.g., PDAC). | GPFS servers were rebooted. lsst-dev01 and most of the qserv-db nodes were also rebooted. Native GPFS and NFS mounts were recovered. The outage may have been (unintentionally) caused by user processes, but we will continue to investigate.
2017-10-25 22:00 | 2017-10-26 11:20 | LSST | Full/partial GPFS outage. | Full outage for GPFS during the 22:00 hour on 2017-10-25; the outage for NFS sharing of GPFS (for qserv, sui) continued through the night; a full outage for GPFS recurred around 08:44 on 2017-10-26. | All GPFS services and mounts have been restored.
2017-10-26 09:04 | 2017-10-26 09:04 | Various buildings across campus, including NPCF and NCSA | An issue with an Ameren line from Mahomet caused a bump/drop/surge in power that lasted 2 ms. | LSST had approximately 20 servers at both the NPCF and NCSA buildings reboot. | It was a momentary issue with minimal effect on most systems.
2017-10-26 00:00 | 2017-10-26 08:00 | ICCP | gpfs_scratch01 was filled by a very active user. | Additional space in scratch wasn't available. | An out-of-cadence purge was run to free 2 TB; users' jobs were held in the scheduler; the user was contacted.
2017-10-25 06:00 | 2017-10-25 14:05 | Blue Waters | Security patching for the CVE-2017-1000253 security vulnerability. | Restricted access to logins, scheduler, and compute nodes. HPSS and IE nodes were not affected. | The system was patched. Login hosts were made available at 9 am. The full system was returned to service at 14:05.
2017-10-24 09:50 | 2017-10-24 20:10 | LSST | Network outage / GPFS outage. | All LSST nodes in NCSA 3003 (e.g., lsst-dev01/lsst-dev7) and NPCF (verify-worker, PDAC) that connect to GPFS (as GPFS or NFS) lost their connections. All LSST nodes at NPCF lost network during network stack troubleshooting and replacement of a third bad switch. | A third bad switch was discovered and replaced. All nodes have network and GPFS connectivity once again.
2017-10-23 08:00 | 2017-10-24 05:00 | Campus Cluster | Campus Cluster October maintenance. | Total outage of the cluster. | Replaced core Ethernet switches in the shared services pod, ran new Ethernet cables for the shared services pod, moved the DES rack from the shared services pod to the Ethernet-only pod, and deployed a new patched image.
2017-10-21 17:15 | 2017-10-23 17:45 | LSST | First one and then two public/protected network switches went down in racks N76 and O76 at NPCF. | Mostly qserv-db[11-20] and verify-worker[25-48]; there was also a shorter outage for qserv-master01, qserv-dax01, qserv-db[01-10], all of SUI, and the rest of the verify-worker nodes. | Two temporary replacement switches were swapped in. Maintenance and/or longer-term replacement switches are being procured for the original switches.
2017-10-18 13:00 | 2017-10-18 14:00 | Networking | Replaced a linecard in one of our core switches due to hardware failure. | Any downstream switches were routed through the other core switch. | All work was completed successfully.
2017-10-19 08:00 | 2017-10-19 21:30 | LSST | Outage and migration of qserv-master01: provisioning of new hardware and copying of data from the old server to the new one. | qserv-master01 (and any services that depend on qserv-master01, which may include services provided by qserv-db*, qserv-dax01, and sui*) | UPDATE (2017-10-19 15:15): the OS install took much longer than anticipated and completed at 15:00. The data sync has started. Extending the outage until 22:00. Completed.
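A host-to-host data migration like the one above is commonly done with rsync, so that a long initial copy can be followed by a short final pass inside the outage window. This is a generic sketch with hypothetical host and path names, not the actual migration commands used for qserv-master01.

    # Initial bulk copy while the old server is still in service (hosts/paths are hypothetical).
    rsync -aHAX --numeric-ids /qserv/data/ qserv-master01-new:/qserv/data/
    # Final pass during the outage, after services are stopped, to pick up recent changes
    # and remove anything deleted since the first pass.
    rsync -aHAX --numeric-ids --delete /qserv/data/ qserv-master01-new:/qserv/data/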
2017-10-19 08:00 | 2017-10-19 12:00 | LSST | Routine patching and reboots, pfSense firmware updates (NPCF), Dell server firmware updates (NPCF). | All NCSA-hosted resources except for Nebula. | Maintenance completed successfully. (The qserv-master migration is ongoing; see the separate status entry.)
2017-10-18 14:45 | 2017-10-18 15:35 | Campus Cluster | Restart of the resource manager failed after removing all blocked array jobs. | Job submission | Opened a case with Adaptive (#25796). Found more array jobs and bad jobs in the jobs directories and removed all of those.
2017-10-15 08:15 | 2017-10-15 08:30 | Open Source | Emergency upgrade of Atlassian Bamboo. | Bamboo was down for a few minutes during this outage window. | Bamboo upgraded to the latest version.
2017-10-14 22:15 | 2017-10-14 23:35 | Campus Cluster | Scheduler crash. | Job submission | Opened a case with Adaptive, ran diag, and uploaded the output along with the core file. Restarted Moab.
2017-10-14 13:00 | 2017-10-14 15:23 | Campus Cluster | Resource manager crash. | Job submission | Applied a patch from Adaptive, which helps with faster recovery. Suspended/blocked all current and new array jobs until we have a resolution.
2017-10-06 09:00 | 2017-10-11 01:00 | Nebula | Gluster and network issues. (1) Gluster sync issues continued from the 2017-10-05 Nebula incident. (2) At approximately 2017-10-06 16:10, a Nebula networking issue (unrelated to the Gluster issues) occurred, resulting in host network drops within the Nebula infrastructure. This internal networking incident resulted in additional Gluster and iSCSI issues. | Many instances were broken because iSCSI was broken by the Nebula network issues, and any instances that were broken because of Gluster remained broken. | All instances have been restarted and are in a state for admins to run. Some mounted file systems might require an fsck to verify. If there are other issues please send a ticket. As the file system continues to heal we may see slower interaction.
2017-10-10 16:30 | 2017-10-10 19:10 | Campus Cluster | Resource manager crash. | Job submission | After removing problematic jobs from the queue we were able to restart the RM. Opened a case with Adaptive and forwarded the job scripts and core files.
2017-10-05 14:00 | 2017-10-05 17:00 | Nebula | Gluster sync issues. | One of the Gluster storage servers within Nebula had to be restarted. | Approximately 100 VM instances experienced IO issues and were restarted.
2017-10-06 08:00 | 2017-10-06 17:00 | NCSA direct peering with ESnet | A fiber cut between Peoria and Bloomington caused our ESnet direct peering to go down. | All traffic that would have taken the ESnet peering rerouted through our other WAN peers, so there were no reported outages of connectivity to resources that users would normally access via this peering. | The fiber cut has been repaired and the peering has been re-established.
2017-10-06 08:00 | 2017-10-06 10:00 | LSST | Kernel and package updates to address various security vulnerabilities, including the PIE kernel vulnerability described in CVE-2017-1000253. This involved an upgrade to CentOS 7.4 and updates to GPFS client software on relevant nodes. | All NCSA-hosted LSST resources except for Nebula (incl. LSST-Dev, PDAC, and verification/batch nodes) were patched and rebooted. | Maintenance completed successfully. Pending updates to a couple of management nodes (adm01 and repos01) and one Slurm node that is draining (verify-worker11).
2017-10-04 07:40 | 2017-10-04 09:55 | Campus Cluster | Resource manager crash. | Job submission | The initial restart attempt failed. After looking through the core, we decided to try the restart again without any change; this time it worked.
2017-10-03 13:00 | 2017-10-03 19:00 | Campus Cluster | Resource manager crash. | Job submission | After removing ~30 problematic jobs from the queue we were able to restart the RM. Opened a case with Adaptive and forwarded the job scripts and core files.
2017-09-21 02:57 | 2017-09-21 09:40 | Storage server (AFS, iSCSI, web, etc.)

The parchment storage server stopped responding on the network.

  • Several websites were down, including the following: www.ncsa.illinois.edu, cybergis.illinois.edu, nationaldataservice.org, etc.
  • iSCSI storage mounted to the fileserver went offline.
  • Several AFS volumes, including some users' home directories, were offline.

Replaced the optical transceiver on the machine and restarted networking. Also updated the kernel and AFS.

2017-09-20 08:00 | 2017-09-20 13:45 | Campus Cluster | September maintenance. | Total cluster outage. | Maintenance completed successfully.
2017-09-20 08:00 | 2017-09-20 11:30 | NCSA Storage Condo | Normal maintenance: firmware upgrade on the NetApps so new disk trays could be attached for DSIL. | Total file system outage. | The quarterly maintenance was completed.
2017-09-18 11:20 | 2017-09-18 13:30 | Active Data Storage | RAID failure in an NSD server and a disk failure on the secondary NSD server. | The ADS service was unavailable. | Recovered the RAID configuration on the NSD server and replaced the failed disk on the secondary NSD. ADS restored.
2017-09-15 06:20 | 2017-09-15 09:28 | public-linux | OpenAFS storage was not running or mounted after rebooting into a new kernel. | AFS storage was not available from this server. | Reinstalled the dkms-openafs package and restarted the openafs-client. AFS is now working as expected.
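The fix described above (reinstalling the dkms-openafs package so the kernel module is rebuilt for the newly booted kernel, then restarting the client) would look roughly like this on a CentOS host. Treat it as an illustration rather than the exact commands that were run.

    # Rebuild the OpenAFS kernel module against the running kernel and restart the client.
    yum reinstall -y dkms-openafs        # triggers a DKMS rebuild for the new kernel
    service openafs-client restart       # remount /afs via the client service
    ls /afs                              # quick sanity check that AFS is reachable again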
2017-09-10 09:452017-09-10 11:30NCSA Open SourceUpgrade of Bamboo, JIRA, Confluence, BitBucket, FishEye, CrowdDuring the upgrade the services will be unavailable for a short amount of time.All services upgraded successfully. 
2017-08-31 11:072017-08-31 11:11NCSA LDAPNCSA LDAP TimeoutsNCSA LDAP was overloaded and timing out. Users were not able to authenticate via NCSA LDAP during that time.NCSA LDAP stopped timing out at 11:11 am and authentication resumed. 
2017-08-28 11:552017-08-28 12:59NCSA GitLabNCSA GitLab server ran out of disk space for the OSThe web interface at https://git.ncsa.illinois.edu wasn't workingWeb interface is now working. Space freed up by clearing CrashPlan caches. 
2017-08-24 13:002017-08-24 14:30netact.ncsa.illinois.eduTransient config issues from some system patching prevented Apache from starting on the netact server.Network Activation.The issues were fixed and Network Activation is working again. 
2017-08-24 08:002017-08-24 15:30LSSTRack upgrades in NCSA 3003Most LSST Developer services offline during upgradeAll LSST systems are back online with new racks and switches 
2017-08-24 08:002017-08-24 09:30LSSTmonthly maintenance for NPCF (includes patching to address CESA-2017:1789 and CESA-2017:1793)adm01, backup01, bastion01, monitor01, object*, qserv*, sui*, verify-worker*, test0*Maintenance was successfully completed. 
2017-08-23 09:212017-08-23 16:50aForge/iForgeGPFS failed during an upgrade of GPFS on the iForge storage nodes. There was an IB hiccup at the time, but causality is unclear.All jobs on iForge were aborted, GPFS clients needed to be upgraded, and all GPFS client nodes were rebooted.iForge went production shortly before 17:12; aForge went "production" at ~16:30. 
2017-08-22 20:002017-08-22 30:00Patching DHCP servicePatching OS and services on DHCP1.Will need to reboot the DHCP server a few times during this process; during that time DHCP will be unavailable. This is during the evening, so I don't expect any direct issues from this.Patching has been completed. 
2017-08-16 08:002017-08-16 16:00Campus ClusterAugust MaintenanceScheduler and resource manager downUpgraded Moab 9.1.1 and Torque 6.1.1. 
2017-08-16 08:002017-08-16 09:15NoneReplace Line Card in Core SwitchI believe all systems connected to this switch are multihomed and will not experience an outage.The line card has been successfully replaced. 
2017-08-16 00:302017-08-16 02:30Blue WatersTwo cabinets (c10 & c11) had EPO due to XDP control valve failure.Scheduler was paused to isolate failing parts, resumed at 2:09.Parts replaced and cabinets were returned to service. 
2017-08-08 7:002017-08-09 3:00iforge/cfdforge/aforge
  • Update OS image to RH 6.9
  • Update GPFS to version 4.2.3-2
  • Redistribute power drops
All four clusters were updated.All items on checklist completed. 20170808 Maintenance for iforge 
2017-08-03 06:452017-08-03 07:35NCSA Jabber upgradeUpgraded Openfire XMPP jabber softwareNCSA Jabber was unavailable during the upgrade.Jabber was upgraded to the latest version of Openfire 
2017-07-28 17:002017-07-31 eveningDES old operational databaseMigration of the operational database to new hardware happening during the weekend. Update: all of the production data has been migrated except for the largest object table; that is loading now, then the user space will be loaded. Should all be done by this evening.Migration done successfully. Some other maintenance tasks that give DES additional disk space were done too, as well as some performance improvements. 
2017-07-27 11:002017-07-28 15:00netact.ncsa.illinois.eduThe netact.ncsa.illinois.edu network activation server VM needed to be restored from backup.Network Activation serviceThe service has been fully restored 
2017-07-25 02:362017-07-25 18:00Campus Cluster / Scheduler downBlip on mgmt1 causing GPFS drop and scheduler to crash.Scheduler offline.Still taking a long time for the Scheduler to initialize, but jobs can start and run as usual. Opened a case with Adaptive. 
2017-07-20 09:002017-07-20 17:00ROGER Ambari and OpenStackUpdates to the OpenStack control node and the Ambari clusterAmbari nodes (cg-hm08 - cg-hm18), OpenStack instances and serversOpenStack was back in service on time. Ambari had issues mounting HDFS and was held out of service; HDFS was remounted on 25 July 
2017-07-20 06:002017-07-20 10:00All NCSA hosted LSST resourcesMonthly OS patches (addressing issues including CESA-2017:1615 and CESA-2017:1680). Roll out updated Puppet modules. Batch nodes updated firmware.All nodes in NCSA 3003 and NPCF (batch nodes) will reboot.Overall success. Exceptions: verify-worker31 failed a firmware update and is out of commission (LSST-914) and there are connectivity issues for some VMs used by the NCSA DM team (IHS-365). adm01, backup01, and test[09-10] will be patched in the near future. 
2017-07-19 08:002017-07-19 14:44Campus ClusterJuly Maintenance (applied security patch)Cluster wide, except mwt2 nodesApplied new kernel, glibc, bind patches and newest NVIDIA driver. 
2017-06-29 18002017-06-30 0000Blue WatersEmergency maintenance to apply security patch addressing the Stack Guard security vulnerability.Compute, Login, and Scheduler were offline.Kernel and glibc library patched on all affected systems. 
2017-06-22 08002017-06-22 1200All NCSA hosted LSST resourcesCRITICAL kernel and package updates to address Stack Guard Page security vulnerability.Systems will be patched and rebooted.Outage was extended to last past 1000 until 1200. Systems were successfully patched as planned except for qserv-db12 and qserv-db27, which will not boot. We will follow up on those with a ticket. 
2017-06-22 08002017-06-22 0930LSST cluster nodes (verify-worker*, qserv*, sui*, bastion01, test*, backup01)Deploy Unbound (local caching DNS resolver)DNS resolving may have a short (~30 mins) delay. Successfully deployed and all tests (including reverse DNS and intra-cluster SSH) pass. 
2017-06-20 09302017-06-20 1100Blue WatersXDP shutting down caused an EPO on cabinets c1-7 and c2-7.Scheduler was paused to isolate the failing components, then resumed.Warmswapped failing components and returned them to service. 

2017-06-20 09002017-06-20 1000NCSA Open SourceSecurity upgrade needed for Bamboo, will also update the following components: Bamboo, JIRA, Confluence, BitBucket, FishEye.Most of the subcomponents of NCSA opensource will be down for a short time when the software is updated.Upgraded Bamboo, JIRA, Confluence, BitBucket, FishEye to latest versions 

2017-06-16 09002017-06-16 1100ROGER OpenStackNFS backend failed and was restarted.The primary CES server for the OpenStack backend failed and tried to fail over to the secondary server, which also failed; SET was notified and they had the CES NFS service back up by 1100.The ROGER OpenStack dashboard went down and needed a restart. Several VMs experienced "virtual drive errors" and will need to be restarted.SET is still investigating the cause of the GPFS CES service failover. CyberGIS is working with their users to get the affected VMs restarted. 
2017-06-15 08002017-06-15 0930LSST cluster nodes (verify-worker*, qserv*, sui*, bastion01, test*, backup01)Deploy UnboundDNS resolving may have a short (~30 mins) delay.Updates deployed successfully via new Puppet module. All tests passed. EDIT 2017-06-15 1500: Reverse DNS was not working, which broke SSH to qserv* nodes. Disabled Unbound. 
2017-06-14 08:002017-06-14 22:00Network Core SwitchNetwork Engineering will be replacing a line card in one of our core switches due to a hardware issue.All services should remain active. Any affected switch will have a second redundant link to the other core to pass traffic.Line card was successfully replaced. 
2017-06-08 12:002017-06-11 22:20Campus Cluster (scheduler paused)Disk Enclosure 3 failure on DDN 10K.Lost redundancy and forced us to drain the cluster.Repair/replacement of the controller can be time consuming, so we took action to rebalance data out of the failed enclosure. Scheduler was resumed as of 22:00. 

2017-06-07 12:072017-06-07 12:42NCSA LDAPThe NCSA LDAP service crashedNCSA LDAP service was unavailableLDAP software and OS were updated and server rebooted. LDAP is working normally. 
2017-05-31 20:062017-05-31 20:36NCSA LDAPThe NCSA LDAP service was timing outNCSA LDAP service was unavailableThe root cause of LDAP timeouts is still being investigated. 
2017-05-222017-05-26Campus Cluster VMsNetwork issue on ESXi (hypervisor) boxes after maintenance.Could no longer log in to start VMs; the License Server, Nagios, and all MWT2 VMs were down.The issue was fixed on 5/24. Restored license and Nagios services on 5/24. Moved MWT2 VMs to Campus Farm. All VMs returned to service as of noon 5/26. 
2017-05-122017-05-18Condo/NFS partitions onlyThe NFS partition for the condo became extremely unstable after a replication (normal daily maintenance) was completed. Many iterations with FSCK and IBM on the phone got it resolved, followed by 1.5 days restoring files that had been put in lost+found.UofI Library was switched to the READONLY version on the ADS during this time.The root cause is still being investigated. 
2017-05-23 14:052017-05-23 14:13NCSA LDAPThe NCSA LDAP service was timing outNCSA LDAP service was unavailableThe issue is still being investigated, but seems to be steadily available since the incident. 
2017-05-22 15:412017-05-22 15:51idp.ncsa.illinois.edu, oa4mp.ncsa.illinois.eduApache Tomcat out of memoryInCommon/SAML IdP and OIDC authentication services were unavailable.Service restored by failing over to secondary server while memory is being increased on primary server. 
2017-05-20 21:092017-05-20 23:37DES nodes on Campus ClusterCould not communicate outside the switch.All nodes connected to the switch in POD22 Rack2 @ ACB.Upgrading the code on the switch resolved the issue. 
2017-05-20 05:002017-05-20 21:09Campus Cluster and Active Data Storage (ADS)Total power outage at ACB.All systems currently reside at ACB.Power was restored around 13:00. We rotated the ADS rack to align with the Campus Cluster storage rack and changed a couple of VLAN IDs to match campus for the future merger. ESXi boxes are down due to a configuration error after reboot. No major issues in the FSCK output from scratch02. 
2017-05-17 02:002017-05-17 10:45Internet2 WAN connectivityIntermittent WAN connectivity. The outage was a result of Tech Services' DWDM system, which provides us with our physical optical path up to Chicago via the ICCN. Specifically, the Adva card that our 100G wave is on was seeing strange errors, which was causing input framing errors for traffic coming in on this interface.General WAN connectivity to XSEDE sites, certain commodity routes, and other I2 AL2S connections.The Adva card was rebooted and we stopped seeing the input framing errors. Tech Services is working with Adva to find the root cause of the issues on the card. 
2017-05-112017-05-12ESnet 100G connectionNCSA and ESnet will be moving their 100G connection to a different location in Chicago.We have several diverse high-speed paths to ESnet and DOE; traffic will be redirected to a secondary path.  
2017-05-11 06:452017-05-11 07:33NCSA Jabber upgradeUpgraded Openfire XMPP jabber softwareNCSA Jabber was unavailable during the upgrade.Jabber was upgraded to the latest version of Openfire 

2017-05-09 07:002017-05-09 18:15iForge, GPFS, License ServersiForge Planned MaintenanceiForge systems, including the ability to submit/run jobs.PM was completed early, at 18:15. 
2017-05-06 22:002017-05-06 23:00NCSA Open SourceUpgrades of Atlassian softwareNCSA Open Source BitBucketBitBucket is upgraded. 
2017-05-06 09:002017-05-06 10:00NCSA Open SourceUpgrade of Atlassian SoftwareMost services hosted at NCSA Open Source were down for 5 minutes during rolling upgrades.The following services were upgraded: HipChat, Bamboo, JIRA, Confluence, FishEye and CROWD. 
2017-05-05 17:432017-05-05 20:02ITS vSphereA VM node panickedSeveral VMs died when the node panicked and were restarted on other VM nodes. This included LDAP, JIRA, Help/RT, SMTP, Identity, and others.All affected VMs were restarted on other VM nodes. Most restarted automatically. 
2017-04-27 18:102017-04-27 18:55Campus ClusterAnother GPFS interruptionBoth the Resource Manager and Scheduler went down, along with a handful of compute nodes.Restarted the RM and Scheduler and rebooted all down nodes. 
2017-04-27 13:112017-04-27 14:20Nebulaglusterfs crashed due to this bug, so no instances could access their filesystemsAll instances running on NebulaNeeded to reboot the node that systems were mounting from, but took the opportunity to upgrade all gluster clients on other systems while waiting for a reboot. Version 3.10.1 fixes the bug. All instances with errors in their logs were restarted. 
2017-04-27 11:202017-04-27 12:45Campus ClusterGPFS interruptionBoth the Resource Manager and Scheduler went down.The Torque serverdb file was corrupted. Restored the file from this morning's snapshot and modified the data to match the current state. 
2017-04-26 12:002017-04-26 18:30CondoA bug in the deletion of a disk partition from GPFS; a problem within GPFS.DES, Condo partitions, and UofI Library.Partitions had been up for 274 days, with many changes. The delete-partition bug caused us to stop ALL operations on the condo and repair each disk through GPFS. We must have quarterly maintenance; it is just too complicated to go a year without resetting things. 
2017-04-19 16:542017-04-20 08:45gpfs01, iforgeFilled-up metadata disks on I/O servers caused failures on gpfs01.iForge clusters, including all currently running jobs.Scheduling on iForge was paused for the duration of the incident and running jobs were killed. 13% of metadata space was freed. Clusters were rebooted and scheduling resumed. 
2017-04-19 08:002017-04-19 13:00Campus ClusterMerging xpacc data and /usr/local back to data01 (April PM)Resource manager and Scheduler were unavailable during the maintenance.Once again, /usr/local, /projects/xpacc and /home/<xpacc users> are mounting from data01. No more split cluster. 
2017-04-04 (1330)2017-04-04 (1600)NetworkingSome fiber cuts caused a routing loop inside one of the campus ISPs' networks.Certain traffic that traversed this ISP would never reach its final destination, and some DNS lookups would also have failed.Campus was able to route around the problem, and the ISP also corrected their internal problem. The cut fiber was restored last night. 
2017-03-28 (0000)2017-03-29 (1600)LSSTNPCF Chilled Water OutageLSST - Slurm cluster nodes will be offline during the outage. All other LSST systems are expected to remain operational.No issues. Slurm nodes restarted. 
2017-03-28 (0000)2017-03-29 (0230)Blue WatersNPCF Chilled Water OutageFull system shutdown on Blue Waters (except Sonexion which is needed for fsck)FSCK done on all lustre file systems, XDP piping works done (no leakage found), Software updates (PE, darshan) completed. 
2017-03-25 22:152017-03-26 00:08Blue WatersBW scratch MDT failover; df hangs.BW scratch MDT failover; load on the MDS was 500+, which delayed the failover. Post-failover there were some issues that delayed RTS.Scheduler was paused. 
2017-03-25 16002017-03-25 2000Blue WatersBW login node ps hang.Rebooted h1-h3; lost the bw/h2ologin DNS record and had neteng recreate it. Had to rotate logins in and out of round-robins until all were rebooted. User email sent (2).Login nodes rebooted; DNS round-robin changes. 
2017-03-23 (1000)2017-03-23 (1500)NebulaNCSA Nebula OutageNebula will take an outage to balance and build a more stable setup for the file system. This will require a pause of all instances, and Horizon being unavailable.File system online and stable. At this time all blocks were balanced and healed. 
2017-03-16 (0630)2017-03-16 (1130)LSSTLSST monthly maintenanceGPFS filesystems will go offline for the entire duration of the outage. Some systems may be rebooted, especially those that mount one or more of the GPFS filesystems.  
2017-03-15 15:112017-03-15 16:01Blue WatersFailure on cabinet c9-7, affecting HSN.Filesystem hung for several minutes.Scheduler was paused for 50 minutes. Warmswap cabinet c9-7. Nodes on c9-7 are reserved for further diagnosis. 
2017-03-15 09:002017-03-15 12:47Campus ClusterUPS work at ACB: reshuffling electrical drops on 10K controllers, storage IB switches, and some servers.Scheduler will be paused for regular jobs. MWT2 and DES will continue to run on their nodes.UPS work at ACB was incomplete (required additional parts); redistributing power work done. Scheduler was paused for 3 hrs 50 mins. 
2017-03-10 13:002017-03-10 18:00Campus ClusterICCP - We lost 10K controllers due to some type of power disturbance at ACB.ICCP - Lost all filesystems; it was a cluster-wide outage.Recovered missing LUNs and rebooted the cluster. Cluster was back in service at 18:00. 
2017-03-09 09002017-03-09 1500ROGERROGER planned PMBatch, Hadoop, data transfer services & AmbariSystem out for 6 hrs; DT services out until 0000. 
2017-03-08 19:412017-03-08 22:41Blue WatersXDP powered off that served four cabinets (c16-10, c17-10, c18-10, c19-10).Scheduler paused, four racks power cycled. Moab required a restart; too many down nodes and iterations were stuck.Scheduler paused three hours. 
2017-03-03 17002017-03-03 2200Blue WatersBW HPSS emergency outage to clean up the DB2 database.ncsa#nearline; stores were failing with cache full.Resolved cache full errors 
2017-02-28 12002017-02-28 1250Campus ClusterICC Resource Manager downUsers couldn't submit new jobs or start new jobsRemoved corrupted job file 
2017-02-22 16152017-02-22 1815NebulaNebula Gluster issuesAll Nebula instances paused while Gluster was repaired.Nebula is available. 
2017-02-11 19002017-02-11 2359NPCFNPCF Power HitBW Lustre was down, xdp heat issues.RTS 2017-02-11 2359 
2017-02-15 08002017-02-15 1800Campus ClusterICC Scheduled PMBatch jobs and login node access