

Watch this page in the wiki to subscribe to automatic updates to this status page.

Please do not refer to any NCSA Industry Partners on this page. Please use the iforge nomenclature for all of the *forge infrastructure.

Current Status  

End | What System/Service is affected | What is happening? | What will be affected? | Contact Person

Report a problem

Upcoming Scheduled Maintenance

Start | End | What System/Service is affected | What is happening? | What will be affected? | Contact Person
2019-04-02 0900 | 2019-04-02 1100 | CILogon ( | Upgrade PHP from v5.6 to v7.3 | No downtime is expected.

2019-04-09 0900 | 2019-04-09 0930 | CILogon (,, tfca.ncsa.illinois.edu) | Deploy new Luna SA HSM (hsm5) to production and take one old HSM (hsm4) offline (to serve as emergency backup). | No downtime is expected. Use instructions at SafeNet LunaSA HSM Monthly Testing to change the pool of available HSMs on {warm,cool,tepid}.

Previous Outages or Maintenance

Start | End | What System/Service was affected? | What happened? | What was affected? | Outcome | Contact Person

2019-03-18 14:00 | 2019-03-18 15:00 | BW Nearline Endpoint | Scheduled HPSS software patch roll-up | Access to BW Nearline endpoint was suspended | Patch installation was completed.
2019-03-12 07:00 | 2019-03-13 17:45 | LSST - LSST dev/Slurm compute nodes | Network testing | 24 compute nodes were reserved for admin use for this testing | Testing was extended into the 13th but was completed and nodes have been returned to service.
2019-03-12 13:25 | 2019-03-12 14:25 | LSST | Public DNS names were inadvertently removed for LSST's Oracle servers/service and the service became unavailable | LSST Oracle servers/service
  • DNS was completely restored by 14:25
  • Slowness following return to service was initially reported by one user, but this seems to have resolved itself
2019-03-09 22:35 | 2019-03-09 22:35 | LSST | Power sag caused 27 L1 "NCSA test stand" nodes to reboot | 27 L1 "NCSA test stand" nodes | Servers rebooted.
2019-03-09 09:56 | 2019-03-09 10:31 | NCSA Jira, Pop, File-server | A VM host kernel panicked, causing its VMs to restart on alternate hosts. | Jira, pop mail server, and file-server services | VMs automatically restarted.
2019-03-08 06:15 | 2019-03-08 06:45 | NCSA Storage Condo | There was an IB error on the storage network, causing the core servers to lose connectivity to disk. | NFS/GridFTP/Remote Cluster Mounts | The node with the IB issue has been temporarily removed from service and will be placed back in once the issue is resolved.
2019-03-06 08:00 (CST) | 2019-03-06 09:00 (CST) | All services behind pfsense firewall at NCSA (qserv, verify, lsp, oradb) | pfsense network config update to stage 'k8s-prod' deployment. Requires failover of firewall, and may cause a short (~60s) outage of systems behind the firewall. | All services behind pfsense firewall at NCSA (qserv, verify, lsp, oradb) | Complete.

2019-03-04 2:00 pm | 2019-03-04 2:08 pm | NAPS | IDDS applied several updates to the NCSA Allocations Processing Service (NAPS):

  1. Searches for logins will only find those logins for the current domain
  2. Logins will always be created for the same organization as the domain (instead of always creating an NCSA login)
  3. Valid login rules will check the rules for the organization of the current domain
  4. Bug fix to make sure int args to procedures are passed as ints, not strings
  5. Speed up the project loading process
  6. Dynamically determine compute resources
  7. Correct information in the confirmation message when terminating a user from a project
  8. When selecting an allocation for new users, only show the most current allocations for each resource
2019-03-01 09:00 | 2019-03-01 09:33 | aForge | Multiple Ambari services were in an error state; individual service starts would fail. | Job submission was down | Cluster was restarted.
2019-02-27 23:00 | 2019-02-28 08:23 | NCSA VPN | Campus moved Duo to a different instance (off of DUO1) to improve performance and reduce future downtime. NCSA Duo is bundled with campus Duo and is also affected. The vendor has completed changes, but additional work appears needed on the NCSA VPN to accommodate this change. | NCSA VPN was not working with Duo push; entering the 6-digit passcodes generated by the Duo app can be used as a workaround. | NCSA VPN is now working for both push and passcodes.

2019-02-27 23:00 | 2019-02-27 23:59 | Any system using Duo authentication | The vendor moved us to a different Duo instance (off of DUO1) to improve performance and reduce future downtime. Anyone with a current session was not impacted; only people trying to authenticate into a new session were affected. We expected Duo to be up for most of this change window and actual downtime to be minutes. All systems using Duo were affected. | Vendor has completed work and most systems appear to be functioning; however, some local changes are needed for the NCSA VPN (see separate posting).

2019-02-22 06:30 | 2019-02-22 07:00 | ICCP WAN | This morning during a routine generator transfer test, one of the UPS units in a Tech Services networking node, node-1, failed, resulting in a loss of power to portions of node-1. Network engineers were on-site during the test and were able to quickly resolve all issues stemming from that loss of power. Not all equipment hosted in node-1 was impacted, but one of the campus core routers, equipment hosting the Science DMZ (CARNE, and thus ICCP WAN as a whole), and other parts of the ICCN (Inter-Campus Communication Network) were impacted. | All networking in and out of ICCP was down. Intra-cluster networking within ICCP was not affected. | ICCN network engineers resolved the issues and things came back up.
2019-02-21 10:00 | 2019-02-21 14:00 | ICCP | Moab core dump during startup. | No one could submit jobs and no new jobs would start. | Able to restart Moab after removing all checkpoint files.
2019-02-21 08:00 | 2019-02-21 12:00 | LSST

Monthly maintenance

  • OS/Yum updates
  • Switch maintenance in NPCF N73 & P73
  • pfSense update & port negotiation change
  • GPFS server updates
  • Firmware updates for Dell C6420s

ALL LSST systems, including:

  • lsst-dev01, lsst-xfer, etc.
  • PDAC, verification, and Kubernetes clusters
  • tus-ats01

Maintenance was successfully completed with one pending issue:

  • monitoring hosts (lsst-int-monitor; monitor-ncsa) were not showing status information due to a problem reaching InfluxDB (since resolved)
2019-02-21 09:20 AM | 2019-02-21 09:26 AM | Services using DUO | The DUO1 deployment experienced a load balancer failure, resulting in 100% of authentication requests failing to complete. | All systems using Duo were affected. | This issue was identified and resolved via automated remediation by the vendor.
2019-02-18 01:31 PM | 2019-02-18 05:05 PM | ICCP | Moab was crashing after a few minutes of starting. | Jobs could be submitted, but would not start. | Moab was restarted with no additional commands run (showconfig, etc.). This allowed Moab to properly index the job database. After completion, the scheduler was stable.
2019-02-18 9:00 AM | 2019-02-18 11:00 AM | LSST - K8s | Security update of Docker and Kubernetes packages to address CVE-2019-5736 | Qserv and all LSST services running in K8s. | Patching completed on time (10:00 AM). Additional troubleshooting of lsp-stable & lsp-int was indirectly related to the update.
2019-02-15 1:15 PM | 2019-02-15 ~1:45 PM | Some internet connectivity | ICCN router card crashed. Some commodity internet traffic was affected during the timeframe listed. | Commodity traffic to/from NCSA. | This has been resolved.
2019-02-13 17:00 | 2019-02-13 21:00 | netdot.ncsa.illinois.edu | NetEng will be migrating Netdot to a new platform. | Users will not be able to log into the NetDot IPAM and make/view DNS entries. The DNS servers will remain available throughout the window. | This has been completed.
2019-02-10 11:40am | 2019-02-12 11:50am | ICCP | A controller failed, which caused an interruption with the redundant controller; a new enclosure is in place, still waiting on a valid second controller. The cluster has returned on one controller after FSCK came back clean on the file system. | Shared file systems on the cluster were unavailable | After force-verifying the pools, running FSCK on the file system, and swapping the enclosure, the file system returned to service. The new controller was successfully installed on 02/13; opened a PMR with IBM on FSCK.
2019-02-11 11:00 | 2019-02-11 17:50 | IDDS job processing | We will be doing a correction to a large number of Blue Waters job records in the IDDS database. This process will begin at 11am and is expected to last around 6-7 hours. | There will be a small interruption to real-time job loading for Blue Waters that should last around 1 hour. Although there should be little impact to other systems, database access to the jobs table might be sluggish.
2019-02-10 21:00 | 2019-02-11 09:15 | NCSA Open Source | Kernel crashed; the proxy server went down, resulting in all NCSA Open Source services being unreachable | NCSA OpenSource: JIRA, Wiki, Bamboo, Confluence | Physical reboot of the server resolved the issue.
2019-02-08 13:00 | 2019-02-08 17:30 | BlueWaters HPSS, ncsa#Nearline globus service | HPSS core server encountered a bug and crashed. Vendor installed a patch to the core HPSS server; the system was anticipated to return to service by 17:20. | BlueWaters HPSS storage; Globus transfers to/from ncsa#Nearline | Vendor installed a patch. HPSS and ncsa#Nearline were returned to service.

5:00 AM | 5:30 PM | ncsa#Nearline (GO) | Scheduled Maintenance | Software and firmware updates completed. | ncsa#Nearline (GO) returned to service.
9:05 AM | 3:14 PM | BW/Scheduler | HSN issue; full reboot to recover | Mainframe rebooted and all running jobs were lost. | BW returned to service.
2019-02-05 07:00 | 2019-02-05 22:00 | iForge/aForge | Quarterly Maintenance (20190205 Maintenance for iForge) | All systems were unavailable during the maintenance. | Maintenance was successfully completed. iForge and aForge were returned to service.
2019-02-02 6:40 | 2019-02-02 10:20 | ICCP scheduler | Root partition filled up on cc-mgmt1. | Both resource manager and scheduler were down | Booted the system into single-user mode, gzipped the old messages file, and moved it to GPFS. Had issues restarting Moab after that; restarting Moab with the clear checkpoint option worked.
2019-01-31 06:00 | 2019-01-31 07:10 | NCSA ITS vSphere vCenter | Upgraded ITS vSphere vCenter server to the latest version | All VMs remained online during the maintenance, but management through vCenter was unavailable. | Upgrade complete.

2019-01-30 10:00 p.m. | 2019-01-30 12:00 p.m. | NCSA XSEDE DNS server | Performing patching/upgrade on ns1.xsede.org | While patching, the DNS server will be unavailable intermittently. Backup DNS servers will remain available during this time frame. | Maintenance was completed.





Fileserver | Scheduled Maintenance | Shares on Fileserver were unavailable during the outage. | Maintenance complete.
2019-01-18 12:14 | 2019-01-18 14:32 | RSA OTP user portal | An ESXi server crashed, taking down several VMs it was hosting. The OTP VM rebooted on an alternate ESXi host. | RSA OTP user portal | RSA OTP user portal online.
2019-01-18 12:14 | 2019-01-18 13:30 | JIRA, file-server, ad-a, jabber, vsphere, email relay | An ESXi server crashed, taking down several VMs it was hosting. The VMs all rebooted on alternate ESXi hosts. JIRA had corrupted index files and took a while to repair. | JIRA, file-server, ad-a, jabber, vsphere, and email relay all rebooted | JIRA, file-server, ad-a, jabber, vsphere, and email relay rebooted and are online.
2019-01-17 08:00 | 2019-01-17 12:00 | LSST

Monthly maintenance

  • Power rebalancing in NPCF L73
  • Switch maintenance in NPCF M73, N73, P73
  • Critical security patching
  • Dell firmware upgrades
ALL LSST systems (incl. lsst-dev01, lsst-xfer, etc. as well as PDAC, verification, and Kubernetes clusters, and tus-ats01)

Maintenance was completed successfully with the following caveats:

  • lsp services in Kubernetes are not fully functional (this is carryover from before the PM; see discussion on Slack, dm-lsp-users and possibly other channels)
  • lsst-l1-cl-dmcs will not boot after firmware updates

Please open tickets if you notice other issues.





NPCF Emergency power off | Emergency power off panel was energized | Facility electrical and HVAC systems | Panel is back to normal.
01/12/2019 8AM | 01/12/2019 1PM | BW/Mainframe resource | Hung threads on scratch/home; paused the scheduler at 9:30AM; HSN required a full reboot to recover | Mainframe rebooted and all running jobs were lost. | BW returned to service at 1PM | Timothy Bouvet



2019-01-10 5:55PM | | code42 crashplan pro e services | Code42 CrashPlan service was updated to the latest release to fix a data-loss problem with clients also running MS OneDrive. | Backup services were interrupted for a few minutes while services updated | Now running the updated Code42 release.
2019-01-10 3:20PM | 2019-01-10 5:00PM | DUO 2-Factor Auth | DUO upstream vendor reported issues with their service. | NCSA systems that use DUO for 2FA | DUO brought their systems back online.
2019-01-09 10:28 AM | 2019-01-10 3:00 PM | BW/HPSS | Power event at NPCF and recovery from fallout | HPSS ncsa#Nearline | HPSS ncsa#Nearline RTS | Glasgow, James A
2019-01-09 10:28 AM | 2019-01-09 4:35 PM | BW/All Resources Down | Power event at NPCF and recovery from fallout | All BW Resources Down | Power restored; all resources except HPSS returned to service.
Industry systems / LSST systems | Power event at NPCF caused some Industry and some LSST systems to go offline | Running jobs on iforge and other systems | The affected systems have been returned to service and users are being notified of which jobs to rerun.

NCSA office net firewall | Software upgrade on NCSA firewall and some config changes. | NCSAnet wireless, wired network (closed and partially-closed nets). IllinoisNet wireless will remain available during the maintenance. | Firewall upgrade did not go through; however, all services have been restored. NetEng is investigating and will work with the vendor to figure out a path forward.

01/08/2019 2:20PM | | code42 crashplan pro e services | Code42 CrashPlan service was updated with the latest security fixes | Backup services were interrupted for a few minutes while services updated | Now running the updated Code42 release.





OTP self-service site | Power to the hypervisor running the RSA OTP self-service site was lost and the service didn't restart | PIN changes and new software distribution were unavailable | Now running an updated version of the software; all functionality was restored.
12/27/18 1:16PM | 12/27/18 11:40PM | NPCF - 2 power blips (B transformers) | Blue Waters ongoing jobs all terminated; scheduler paused while mainframe rebooted. | After an absurdly long outage to perform a reboot, the system was returned to service. There were apparently issues on shutdown, and again on bringup, with various hardware fallout.

12/27/18 7:58AM | 12/27/18 | Blue Waters / bwedge, bwds2 | bwedge and bwds2 rebooted on the backup server; bwdsm-dev became unresponsive (stopped responding/crashed). We are having intermittent issues with ESXi hosts kernel dumping. | VM server went down, impacting VMs on that server; VMs restarted on the other backup server with a temporary interruption in their services. | The host was power cycled. VMs were migrated to balance the load on servers after Jack was returned to service.
2018-12-17 | 2018-12-19 | ICCP | Removed old file systems no longer in production; reformatted LUNs; rolling reboot of NSD servers to pick up new presentations; rebalance started | No user impact; all services remained fully operational | New v5-formatted disks added successfully; FS expanded to full size; rebalance of FS in progress.
2018-12-18 08:00 | 2018-12-18 10:00 | NPCF-EXIT-EAST | The firmware on NPCF-EXIT-EAST was upgraded. | Traffic was re-routed through NPCF-EXIT-WEST during the maintenance. No impact to users was observed. | Firmware was upgraded without incident.
2018-12-12 08:00 | 2018-12-12 20:43 | ICCP

Monthly maintenance

  • cutting the cluster over to the new Spectrum Scale v5 formatted file system

Total cluster outage.

Bringing the system back took a bit longer because the interface renaming script stopped working.
2018-12-11 08:00 | 2018-12-11 10:00 | NPCF-EXIT-WEST | The firmware on NPCF-EXIT-WEST was upgraded. | Traffic was re-routed through NPCF-EXIT-EAST during the maintenance. No user-visible outage occurred. | Firmware was upgraded without incident.
2018-12-07 08:54 | 2018-12-07 19:25 | ICCP | ACB UPS experienced a fault, causing the storage appliance to shut down in a controlled manner | Jobs halted on the system due to lack of parallel file system presence. | F&S was dispatched to put the UPS in bypass; FSCKs were run on the file systems to ensure integrity and the cluster was returned to service.
Wired network connections in NPCF office space | Software upgrade on network switches | Wired network service in NPCF office space. NCSAnet and IllinoisNet wireless remained available | Maintenance was completed.
2018-12-06 06:00 | 2018-12-06 07:30 | NCSA ITS vSphere vCenter | ITS vSphere vCenter server was upgraded to the latest version | All VMs remained online during the maintenance, but management through vCenter was unavailable from 06:18-07:25. | Upgrade was completed successfully.
2018-11-29 08:00 | 2018-11-29 14:00 | LSST

Monthly maintenance

  • Puppet code changes
  • disable CPU hyperthreading
  • OS/Yum updates
  • code upgrades on select service & management switches NPCF
  • pfSense updates

ALL LSST systems (incl. lsst-dev01, lsst-xfer, etc. as well as PDAC, verification, and Kubernetes clusters, and tus-ats01)

Maintenance was completed
2018-11-19 08:00 | 2018-11-19 19:30 | ICCP

Monthly maintenance

  • Split the filesystem
  • Reformat with new v5 format

Total cluster outage.
2018-11-15 5:30PM | 2018-11-15 | NCSA building router in 2045 | Software upgrade on one of the building routers (2045-br) | Traffic failed over to the redundant building router and no impact on network traffic was seen | Maintenance was completed.


Blue Waters/Home filesystem | MDS issue | Scheduler paused; logins impacted | Home file system RTS | Timothy Bouvet
2018-11-14 10:00am | 2018-11-14 11:00am | idp.ncsa.illinois.edu | Upgrade Shibboleth IdP from v3.3.2 to v3.4.1 | ECP (command line) Duo authentication is now supported natively by Shib IdP | Completed a day early | Terrence Fleury

2018-11-14 10:45 am | 2018-11-14 11:20 | Blue Waters /Home filesystem | Investigation ongoing; suspect HSN quiesce affected /home and new job starts during the scheduler pause | Back in service at 11:20 | Timothy Bouvet


2018-11-06 06:00 am | 2018-11-06 12:25 pm | Networking NetSure DC Distribution System; Tape Library QBERT and DIGDUG; iForge racks:

  • Y121, Z121, AA121, CC121, DD121

De-energize distribution power panel DP-6C-020 to install new power panel PPC4.

Loss of power to the core network DC Distribution panel (B side); the network has 2N power feeds, so no impact on the network due to redundancy.

Loss of power to two tape libraries; temporary power feeds will be provided.

iForge system will be powered down for quarterly maintenance.

Work completed as expected | Mohammad Rantissi


7:30 PM | 8:30 PM | Cluster reboot | Memory performance on most k8s nodes was in a degraded state as a result of a power event that occurred over the weekend. Reseating the nodes in their chassis slots resolved the issue. | Systems rebooted and memory performance is back to normal.
2018-11-10 ~04:40 | 2018-11-10 ~04:45 | iForge (select compute nodes) | A power event caused some compute nodes to reboot | Select Skylake platform compute nodes, including 7 nodes in the skylake queue. Jobs running on those nodes would have been impacted. | Systems rebooted and brought themselves back online.
2018-11-10 ~04:40 | 2018-11-10 ~04:45 | LSST (lspdev and select L1 hosts)

A power event caused some hosts to reboot:

  • lspdev kubernetes cluster (3 nodes, including the master node, did not come back on their own and were manually brought online around 09:30)
  • some L1 nodes rebooted as well

lspdev/Kubernetes cluster was unavailable from ~04:40 until ~09:30; select L1 hosts rebooted.

Systems should be back online and functioning. Users are asked to create tickets if there are lingering issues.
NCSA building router in basement 07 (ncsa-07-br) | Software upgrade on one of the building routers. | Traffic failed over to the redundant NCSA building router. No impact on the network was observed | Maintenance was completed successfully without any issues.
2018-11-07 16:50 | 2018-11-07 17:00 | NCSA Jira | Jira was rebooted to increase RAM. | NCSA's Jira was offline while its RAM configuration was upgraded. | Upgrade was completed successfully without any issues.
2018-11-06 06:00 | 2018-11-06 21:45 | iForge / aForge | Quarterly Maintenance (20181106 Maintenance for iForge)

All systems were unavailable during the maintenance.

Maintenance was completed successfully:

  • aForge returned to service at 21:15
  • iForge returned to service at 21:45

NOTE: OFED was updated to v4 on the clusters during the PM. Some MPI software may need to be recompiled due to changes in libraries (e.g., libpsm_infinipath is no longer present in OFED v4). Frequently used openmpi installations have been updated to accommodate this change. Software compiled against affected MPI software may also need to be recompiled.
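A quick way to spot binaries affected by a library removal like the one noted above is to inspect their dynamic dependencies. A minimal sketch, assuming GNU `ldd` is available; the helper name and the sample binary path are illustrative, not part of any iForge tooling:

```shell
#!/bin/sh
# Report whether a binary still references the OFED v3 library
# libpsm_infinipath (and so likely needs recompiling against OFED v4).
needs_rebuild() {
    bin="$1"
    if ldd "$bin" 2>/dev/null | grep -q 'libpsm_infinipath'; then
        echo "$bin: links libpsm_infinipath (rebuild needed)"
    else
        echo "$bin: no libpsm_infinipath reference"
    fi
}

needs_rebuild /bin/ls
```

Running this over an application's executables would flag which ones need recompilation against the updated MPI stacks.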
2018-11-06 07:00 | 2018-11-06 09:00 | NCSA VPN Service | The VPN was upgraded. | The NCSA VPN service was down for maintenance | The NCSA VPN has been upgraded.
Wired networking on 4th floor in NCSA building | Software upgrade on network closet switches | Wired network and VOIP phones on 4th floor. NCSAnet wireless remained available during maintenance window. | Upgrade was completed successfully without any issues.
2018-11-02 3:30 AM | 2018-11-02 6:10 AM | iforge cluster

GPFS issue: "ls /usr/local" hangs. Direct access to some directories under /usr/local was OK, i.e. "ls /usr/local/modules-3.2.9.iforge" was OK.

The iforge login node was down; new ssh connections were hanging. There was the potential for issues with running jobs. The scheduler was paused.

Something odd going on with iforge020 was causing the hangs. Once iforge020 was rebooted, access to /usr/local was unlocked.

Jim Long
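Hangs like the one above (one path blocking indefinitely while sibling paths respond) can be probed without wedging the calling shell by bounding the check with a timeout. A minimal sketch, assuming GNU coreutils `timeout`; the helper name and paths are illustrative:

```shell
#!/bin/sh
# Probe a possibly-hung filesystem path without blocking the caller:
# bound the directory listing with a 5-second timeout and report the result.
probe_path() {
    path="$1"
    if timeout 5 ls "$path" >/dev/null 2>&1; then
        echo "$path responsive"
    else
        echo "$path hung or unavailable"
    fi
}

probe_path /usr/local
```

A loop over suspect mount points with this helper can narrow down which node or filesystem is wedged before anything is rebooted.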

2018-10-30 9:00 p.m. | 2018-10-30 11:00 p.m. | NCSA DHCP | Patches | The DHCP server will be unavailable periodically for reboots and patching. Possible timeouts for DHCP, but generally no interruptions are expected.
Wired networking on 3rd floor in NCSA building | Software upgrade on network closet switches | Wired network and VOIP phones. NCSAnet wireless remained available during maintenance window. | Code upgrade completed successfully without any issues.


2018-10-22 1:00pm | | IDDS servers | Patches | XRAS admin/review/submit UIs, XDCDB Admin UI, NAPS | Patches were applied.
2018-10-18 08:00 | 2018-10-18 12:00 | LSST

Monthly maintenance

  • firmware update and reboot on monitor01 (monitoring collector)
  • OS & kernel updates
  • Puppet code changes
  • monitor01/InfluxDB (and likely the front-end Grafana monitoring) will be unavailable for a short period of time
  • tus-ats01 will be unavailable for OS & kernel updates
  • the Puppet changes are intended to be functional "no-ops" and should cause no outage, although we scheduled these changes during our monthly PM window in case something unexpected occurs

Maintenance completed.
2018-10-17 08:00 | 2018-10-17 18:00 | ICCP

Monthly maintenance

  • Deploying new kernel with CVE-2018-14634 fix
  • Switching to MTU9000 across the cluster
  • GPFS 5.0.2 upgrade
  • Firmware bug fixes applied to DDN SFA14KX

Total system outage. Maintenance completed.
2018-10-17 15:40 | 2018-10-17 23:00 | 3rd Floor Networking | Portions of the third floor did not have network connectivity due to a switch malfunction. | Portions of the third floor were without network connectivity. | The issue has been resolved.


08:00 AM | 08:50 PM | Blue Waters | Maintenance to apply security patches | All services for Blue Waters will be down except for ncsa#Nearline | Outage extended for 2 hours due to unexpected power loss to 3 rows of cabinets.
10:00 AM | 01:00 PM | DUO 2-Factor Auth | DUO upstream vendor has reported issues with their service. | NCSA systems that use DUO for 2FA might experience intermittent failures.
2018-10-15 7:30 am | 2018-10-15 11:00 pm | Nebula, File-server | Power loss in the NCSA building is causing issues with systems | Nebula web services are turned off; File-server is unavailable | Systems were brought back online and returned to service.
2018-10-15 07:35 | 2018-10-15 09:15 | LSST | Power event -> host outage at NCSA 3003

Affected: all physical LSST hosts (and VMs) at NCSA 3003:

  • incl. lsst-dev*, lsst-xfer, lsst-l1*, lsst-daq, lsst-dev-db
  • most physical hosts rebooted themselves after the event, although a few L1 systems had to be manually powered on
  • most VMs had to be manually started after the event
2018-10-11 16:30 | 2018-10-11 17:00 | crashplan backup service | CrashPlan was upgraded to Code42 6.8.4 | CrashPlan service was restarted and clients reconnected | CrashPlan service has fewer security vulnerabilities.
2018-10-09 | 2018-10-09 | DHCP | Additional DHCP attributes will be passed to clients. | The Security Operations group has requested that the Web Proxy Auto-Discovery Protocol (WPAD) be set to blank via DHCP to better secure client workstations/laptops. This should not impact any user's general network usage. | WPAD has been applied to all user networks at NCSA and NPCF (including wireless).
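For reference, blanking WPAD via DHCP is commonly done by defining the site-local option code 252 and serving an effectively empty URL. A minimal sketch for ISC dhcpd; the subnet values are illustrative and the actual NCSA configuration is not shown on this page:

```
# dhcpd.conf fragment (illustrative): define the site-local WPAD option
# (code 252, a text string) and hand clients an effectively blank URL so
# browsers do not fetch a proxy auto-config file from a rogue host.
option wpad-url code 252 = text;

subnet 192.0.2.0 netmask 255.255.255.0 {
  range 192.0.2.10 192.0.2.100;
  # "\n" rather than "" because some clients ignore a zero-length option
  option wpad-url "\n";
}
```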

2018-10-08 17:00 | 2018-10-08 21:00 | Wired networking on 2nd floor in NCSA building | ncsa-2045 network switch software upgrade | Wired networking for desktop computers and VOIP phones. Wireless network remained available during maintenance | The switch stack on the second floor was upgraded. There were some issues during the upgrade process, due to which the maintenance ran longer than expected. All networking services have been restored.
2018-10-04 16:35 | 2018-10-04 16:35 | jabber.ncsa.illinois.edu | The Openfire jabber server stopped working correctly and was restarted. | Everyone using jabber reconnected. | Jabber rooms are working as they should.
2018-10-04 08:00 | 2018-10-04 09:15 | LSST | Critical security patching

ALL LSST systems (incl. lsst-dev01, lsst-xfer, etc. as well as PDAC, verification, and Kubernetes clusters)

The following systems remained online and unaffected:

  • tus-ats01

Maintenance was completed.
2018-10-03 06:00 | 2018-10-03 07:00 | Campus Cluster - Networking | Maintenance was performed on the OmniPoP uplink on ur1carne, which is the upstream router for all ICCP-based network traffic. Engineers worked to transition the link from old optical transport gear to new gear that is optically protected with automatic failover. | All traffic that would normally take this OmniPoP link will reroute through other WAN links on ur1carne. Downtime of < 15 min is expected within the hour window while engineers swing the fiber jumpers from the old optical gear to the new optical gear. There should be no impact to DES or any ICCP customers. Please contact NetEng if you notice any unexpected outages. | Maintenance was successful.
2018-10-02 17:00 | 2018-10-02 20:00 | NPCF Networking DC Power System | Testing and maintenance of the DC power system and battery backup will be performed. | No outage. | Tests were completed without incident.
2018-09-26 11:00 | 2018-09-26 12:00 | Campus Cluster - MWT2 Networking | Maintenance was performed on the Internet2 uplink on ur1carne, which is the upstream router for all ICCP/MWT2-based network traffic. | MWT2 lost connectivity to LHC1 but everything else rerouted, all of which was expected. | The maintenance was successful; no issues have been reported.
2018-09-20 | 2018-09-24 | OpenAFS servers | OpenAFS file and database servers were upgraded to 1.6.23 | The OpenAFS servers were upgraded to the latest code without service interruption | Now running with the latest security fixes in place.
2018-09-20 08:00 | 2018-09-22 16:50 | LSST Qserv | qserv-master01 is having trouble booting after a motherboard replacement during planned maintenance. | Qserv in general, specifically qserv-master

2018-09-20 08:00 | 2018-09-20 14:40 | LSST LSPdev | LSPdev kubernetes was having a gateway error after upgrading | LSPdev kubernetes

2018-09-20 08:00 | 2018-09-20 14:00 | LSST

Monthly maintenance (Sep):

  1. Network switch firmware updates/reboots
  2. Lenovo firmware updates/reboots
  3. OS package updates/reboots
  4. ESXi hypervisor updates/reboots
  5. GPFS client changes and upgrade to 4.2.3-10
  6. GPFS server upgrade to 4.2.3-10

All LSST systems and services will be unavailable for the duration of the maintenance period.

qserv-master01 and LSPdev are still having issues. These will be tracked as separate incidents.
2018-09-19 08:00 | 2018-09-19 22:00 | Campus Cluster

Monthly maintenance

  1. Switching to CentOS 7.5 across the cluster
  2. Upgrading gpfs (client only)

All compute and login nodes were down. The filesystems were also unavailable due to issues with the change to gpfs and RH7.5.

The cluster was returned to service.
2018-09-17 17:30 | 2018-09-17 19:30 | Wired networking on 1st floor (ncsa-1045) | Software upgrade on network switch for 1st floor. | Wired networking for users on 1st floor was unavailable as network engineering performed software upgrades on their equipment. Wireless network (NCSAnet) remained available during this time. | Maintenance was completed successfully. Users can contact NetEng if they have any issues with their wired network.
2018-09-12 06:00 | 2018-09-12 09:00 | DNS1, DNS2 | DNS1 and DNS2 will be updated/upgraded. The DNS servers will be undergoing routine maintenance; during this maintenance window, systems and services will be restarted. One DNS server will always be responsive during the maintenance. | Updates have been completed.
2018-09-11 9:30 a.m. | 2018-09-11 11:00 a.m. | Internet2 100G connection | ICCN engineers will be migrating our Internet2 connection to the new ICCN optical equipment. | Traffic will fail over to a secondary peering. We expect minimal impact to users. Direct peering will fall back to normal routing. | The migration has been completed.
2018-09-11 8:30 a.m. | 2018-09-11 11:00 a.m. | ESnet 100G direct connection | We will be migrating our ESnet connection to the new ICCN optical equipment. | Traffic will fail over to a secondary peering. We expect no impact during this maintenance. | The migration has been completed.
2018-10-10 09:00 | | netact.ncsa.illinois.edu | Multiple users reported they were unable to delete their activations or change networks within Netact. | netact.ncsa.illinois.edu | Fixed the bug and tested. Issue was resolved.
2018-09-06 11:00 | 2018-09-06 12:00 | MREN Circuit Move | The MREN WAN circuit is being moved to an optical protection switch. | Traffic will be re-routed over an alternate peering during the test.
2018-09-06 16:00 | 2018-09-06 16:40 | RSA Authentication Manager | RSA Authentication Manager 8.2 SP1 P08 was applied | Both primary and replica servers were updated with the latest security patches | Running 8.2 SP1 P08.
2018-08-15 08:00 | 2018-08-15 20:08 | Campus Cluster

Preventative Maintenance

  • FSCK on filesystem
  • Reseat and reset management modules on IB core switch
  • BIOS updates on some nodes
  • Upgrade Carne uplink to 2x100G

Total outage

Corrected bad inode on filesystem. Rebooted IB core switch. 2x100G links are working.
2018-08-29 09:38 | 2018-08-29 10:21 | Services that utilize Duo 2FA, including bastion hosts and VPN | Latency issues with DUO1 affected every service that uses Duo for authentication, including bastion hosts and VPN. | Service appears to be returning to normal per vendor status updates.
2018-08-20 10:20 | 2018-08-20 12:00 | sslvpn.ncsa.illinois.edu | Intermittent login issues with Duo two-factor authentication due to an outage on Duo's end. | Two-factor authentication to the sslvpn service. | Duo identified the issue and resolved the outage. Users can connect to sslvpn over Duo 2FA.
2018-08-16 11:25 | 2018-08-16 12:41 | Slack | Slack is reporting connectivity issues on their status page | Slack reported "connectivity issues impacting all workspaces" | Slack reported this resolved at 12:41, though NCSA users reported it working around 11:38.
2018-08-15 08:00 | 2018-08-15 16:00 | ISDA VM infrastructure | Upgrade of all VM servers as well as the backend storage system | NCSA opensource, NCSA docker hub, ISDA VM servers | Upgrade was completed.
2018-08-15 0800 | 2018-08-15 1430 | Storage Condo | All servers were upgraded to gpfs and the clustered NFS service was implemented as well. | Storage Condo | Upgrade was completed.
2018-08-14 05:00 | 2018-08-14 09:00 | NCSA Wiki | The wiki will be upgraded to Confluence 6.10.1 and then to 6.10.2. | The wiki will be down intermittently during the upgrade. Read the banner at the top of wiki pages for current status. | Upgrade was completed.
2018-08-07 07:00 | 2018-08-10 12:00 | iForge ifdbpoc server | Hardware issues require migrating to a new server; some signs indicate service was impacted prior to 2018-08-07 07:00 but no reports have confirmed this | ifdbpoc | Admins migrated data and services to another server. Verification was performed by the apps team.
2018-08-08 1430hrs | 2018-08-10 0730hrs | Blue Waters Nearline Endpoint | Due to very high demand for data retrieval from Nearline, a pause rule is in effect to allow manual task scheduling. You may submit tasks as normal and they will be run as quickly as possible. | Data storing and retrieving to/from the Nearline storage system. | Many tasks were manually scheduled and completed to help re-balance system utilization. The endpoint pause rule was lifted and all tasks are running.
2018-08-07 07:002018-08-07 22:15iForge / aForgeQuarterly Maintenance ( 20180807 Maintenance for iForge )All systems will be unavailable during the maintenance.

In progress

  • iForge was placed into production at 22:15
  • aForge was brought back online by 19:45

2018-08-03 11:302018-08-03 13:30NCSA VPNA configuration issue caused some VPN users connection problems to some NCSA resources.Some VPN users reported connectivity problems to some internal NCSA resources.A configuration change was applied which corrected the routing
2018-07-27 11:452018-07-27 13:45NCSA WikiThe wiki was intermittently slow.Updated several software packages and rebooted the wiki.
2018-07-27 08:002018-07-27 08:15NCSA VPNThe old NCSA VPN was decommissioned.The old VPN has been decommissioned and all users should be using the new VPN.
2018-07-26 14:002018-07-26 19:00NCSA RTThe RT help site was intermittently slow.Updated several software packages and rebooted RT.
2018-07-25 14:302018-07-25 14:40NCSA WikiWiki RestartConfluence service restarted
2018-07-24 13:202018-07-24 13:55crashplanCrashplan was upgraded to 6.7.3 for the latest feature and security updates. Client updates will push out to systems automatically over the next few days.All client backups paused for about 2 minutes as servers restarted running the new Code42 version.
2018-07-22 19:142018-07-22 19:45NCSA GitLabNCSA GitLab server was updated.
  • Renewed SSL certificate
  • Upgraded GitLab software
  • Increased CPU & RAM 
2018-07-19 18:442018-07-20 10:45:13nebulaThe nebula controller experienced a fatal hardware error on its 10GbE NIC

The horizon interface to nebula and all OpenStack command-line tools are non-functional. Keystone authentication services are also offline.

Instances that were running should continue to run, but restarting will probably fail until the controller is repaired. Launching new instances will also fail.

Replaced card; the controller is now accessible
2018-07-19 12:00 2018-07-19 12:30

LSST: lsst-dev-db and dependent services, including kubernetes lspdev

Following the July 19 planned maintenance, MariaDB services on lsst-dev-db are unavailable along with dependent services, including:

  • kubernetes lspdev

DB services on lsst-dev-db along with dependent services, including:

  • kubernetes lspdev
2018-07-19 08:002018-07-19 12:00LSST

Monthly maintenance (July):

  1. Dell firmware updates/reboots
  2. OS package updates/reboots
    1. including upgrades to CentOS 7.5
  3. GPFS client changes and upgrade to

  4. GPFS server upgrade to

ALL lsst-dev systems (incl. lsst-dev01, lsst-xfer, etc. as well as PDAC, verification, and Kubernetes clusters)

The following systems will remain online and unaffected:

  • lsst-daq
  • lsst-l1-*
  • tus-ats01

Maintenance was successfully completed, although the following resultant issue is being tracked in a separate status event:

DB services on lsst-dev-db are unavailable along with dependent services, including:

  • lspdev
2018-07-16– 9002018-07-16– 1938Blue WatersSystem was upgraded for security issues and to migrate to CUDA 9.1Blue Waters compute and schedulerBlue Waters is now updated
David King

2018-07-09 – 11302018-07-10 – 1700Campus Cluster Monitoring WebpageSET is moving set-analytics to https. This should have been a simple host-name change, but after the change the new value was not picked up.The monitoring web page showed a loading circle that never resolved.Set up a Grafana instance for the display of the Campus Cluster monitoring data.
2018-06-282018-07-09NebulaNebula was taken offline to repair the filesystemAll Nebula servicesNebula is performing well
2018-06-29 -- 1300hrs 2018-07-08 – 1400hrsBlue Waters Nearline Endpoint
Due to very high demand for data retrieval from Nearline, a pause rule is in effect to allow manual task scheduling. You may submit tasks as normal and they will be run as quickly as possible.
Tasks submitted to Globus will start in a paused state but will be released to run, at the earliest possible time, based on resource availability.Backlog of file stages was cleared and the endpoint pause rule was removed.
Access to NPCFFor the July 4th UIUC fireworks show, parking lots E14 and E14-shuttle will be closed from 6:00 p.m. Monday, July 2nd, through 6:00 a.m. Friday, July 6th. No parking will be allowed in these locations at any time during this period.  Please do not park in the NPCF dock area - use the shuttle buses, or park in lot E46 (south on Oak St.).Parking facilities for NPCF Parking is back to normal 
2018-05-03 14:302018-06-28 09:00iForge gpu queueBoth nodes in the general 'gpu' queue were offline due to issues with the GPUsiForge 'gpu' queue could not be usedTried driver updates and engaged with vendors; ultimately got one node working with 4 M40 GPUs rather than the previous 2 K80 GPUs; continuing to engage with vendors to get the other node working, but the queue is now available. 
0800 2018-07-021200 2018-07-02Blue Waters NearlineOne tape library (of four) will be powered down for hardware maintenance (replacement of tape import/export module).Access to tapes in the affected library will be blocked until the system returns to service. Users staging data may see delays in accessing data until the library is back online.Work was completed with some delay (scheduled to complete by 0930) due to a failed SD card (used for storing and loading library geometry)
2018-06-27 9:002018-06-27 1:00LSST - k8s lspdevkub001 unplanned reboot and kub004 ran out of memory.lspdev JupyterHub

Nodes/Services rebooted.

Kubernetes pods restarted.
2018-06-27 08:302018-06-27 11:49SlackSlack is reporting connectivity issues on their status page.Slack reports, "workspaces should be able to connect again"



Blue Waters Scratch FilesystemTop-of-rack network switch died in rack 8. Cray was onsite, performed a workaround, and will replace it Monday. Sonexion rack 28 became unresponsive and was rebooted.Partial scratch outage of ost169-179Bypassed the faulty switch; rack 28 Sonexion rebooted. Faulty switch replaced Monday
2018-06-21 -- 1200hrs 2018-06-23 -- 1045hrsBlue Waters Nearline Endpoint
Due to very high demand for data retrieval from Nearline, a pause rule is in effect to allow manual task scheduling. You may submit tasks as normal and they will be run as quickly as possible.
Tasks submitted to Globus will start in a paused state but will be released to run, at the earliest possible time, based on resource availability.Many tasks were pushed through the system by manually ordering them to reduce tape drive competition. Endpoint pause rule removed and all tasks
2018-06-21 08:002018-06-21 09:35LSST

Monthly maintenance (June):

  • pfSense firewall update
  • OS package updates/reboots for CentOS 6.9 servers (lsst-web, lsst-xfer, lsst-nagios)
  • Slurm update (lsst-dev01, lsst-verify-worker*)
  • Update host firewalls on GPFS servers
  • iDRAC configuration updates on lsst-dev01 and ESXi hosts

CentOS 6.9 servers:

  • lsst-web
  • lsst-xfer
  • lsst-nagios

Slurm/verification cluster

Other impact was not expected, but unexpected issues could have led to connectivity issues for other hosts or downtime for lsst-dev01 or hosted VMs

Maintenance was completed.
2018-06-20 14:002018-06-20 19:00Campus ClusterRolling reboot of the core IO servers to move GPFS from to for CentOS 7.5 support; No downtime occurredSuccessful UpgradeCluster now supports CentOS 7.5
2018-06-182018-06-20 7pmNebulaNebula was shut down to fix broken filesystems.All Nebula servicesNebula is up and running again. Please contact us if you still see issues.
2018-06-19 08:002018-06-19 12:00LSST L1 Test Stand

Scheduled Maintenance:

  • BIOS firmware updates
  • Puppet and firewall changes (including support of SAL unicast/multicast traffic)
  • OS package updates (staying with CentOS 7.4)

Level One Test Stand, including:

  • lsst-daq
  • lsst-l1-*
 Maintenance completed
2018-06-18 07:002018-06-18 09:30vSphere & Various VMsTwo of our hosts went down with network interface errors.Multiple VMs hosted on those nodes (incl. Fileserver, ncsa-print, and subversion)Both hosts are back online, as well as all VMs.
2018-06-16 22:18:322018-06-17 08:10:00


PBSPro server was hung on cfsched

Job scheduling and job submission were failing.Restarted PBSPro server on cfsched.Jim Long
2018-06-15 1330hrs2018-06-15 1530hrsBlue Waters NearlineReplacement of a tape robot transporterThis work is not expected to impact operations. The library system will continue to operate with a single transporter but mount times may be somewhat longer until the second unit is returned to 
2018-06-12 04:3010:00Blue WatersThunderstorms have resulted in a power interruption. This outage impacts both the compute nodes and all filesystems. Therefore, a full reboot will be necessary.Return to service is estimated to be approximately 10 am Central time.Blue Waters in totalFull reboot 
2018-06-12 ~03:452018-06-12 ~06:00Campus ClusterMany compute nodes rebooted. No system on UPS was affected, and some compute nodes remained up. Facilities at ACB report that there were no power events this morning or last night, but this seems the most likely cause.Many compute nodes, but not all. Jobs on the nodes that rebooted were lost.Nodes rebooted at a similar time, and many returned in a state unsuitable to run jobs. Rebooting in smaller groups got everything working
2018-06-12 ~03:402018-06-12 ~06:30iForge

A storm caused a brief power event which impacted:

  • big_mem queue
  • skylake queue
All nodes in the big_mem and skylake queues were rebooted by the power event.Nodes rebooted on their own and were marked back online in the scheduler by around ~6:30am. 
2018-06-12 ~03:402018-06-12 09:00LSST

Storm caused power event which impacted:

  • Kubernetes Commons / lsst-lspdev
  • 75% of verification cluster compute / Slurm


The following nodes rebooted because of the power event:

  • all kub* nodes (causing outage of Kubernetes Commons / lsst-lspdev)
  • 75% of verify-worker* nodes (partial outage of Slurm / verification cluster compute nodes)
  • verify-worker nodes were put back online in Slurm around 06:10
  • Kubernetes Commons resumed service by around 09:00
2018-06-11 08:302018-06-11 8:35Campus Cluster ADSVlan changes on campus clustercampus cluster - Active data storage (ADS)Maintenance completed
2018-06-07 06:302018-06-07 14:00Blue WatersThe boot node crashed, requiring the system to be rebooted. File system and ESLogins remained up.All running jobs were lost; no new jobs were started until the system was returned to service. Torque was updated to ver. 
2018-06-01 00:502018-06-01 03:50Blue Waters/var space filled up by additional logging in Moab to troubleshoot job slide issue.PBS server went down due to no space in /varZipped and moved old Moab logs to lustre file system to free up /var space, then restarted PBS
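The recovery in the entry above — compressing old scheduler logs and relocating them to Lustre to free /var — can be sketched roughly as follows. The log directory, file names, and destination here are assumptions for illustration, not Blue Waters' actual paths.

```shell
# Hedged sketch: free space on a full /var by compressing rotated Moab logs
# and moving them to a roomier filesystem (e.g. Lustre). Paths are illustrative.
archive_logs() {  # $1 = log directory, $2 = destination directory
  gzip -f "$1"/moab.log.* 2>/dev/null || true   # compress rotated logs in place
  mkdir -p "$2"
  mv "$1"/moab.log.*.gz "$2"/                   # move compressed logs off /var
}

# Demo on a throwaway directory standing in for /var/spool/moab/log:
demo=$(mktemp -d)
mkdir -p "$demo/log" "$demo/lustre"
echo "sample log line" > "$demo/log/moab.log.1"
archive_logs "$demo/log" "$demo/lustre"
ls "$demo/lustre"                               # moab.log.1.gz
```

The key point of the outage is the ordering: space must be freed on /var before the PBS server can be restarted cleanly.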
2018-05-31 14:002018-05-31 14:10NCSA Open SourceRetirement of both HipChat and FishEye/CrucibleServices will be shutdown and archived.Services are disabled and will be archived in a month. 

2018-05-31 08:00


2018-05-31 11:55NCSA ITS vSphere vCenterITS vSphere vCenter server will be upgraded to the latest VMware vCenter 6.7 All VMs will remain online during the maintenance, but management through vCenter will be offline during the upgrade.Successful upgrade to VMware vCenter
2018-05-23 06:552018-05-24, 1900hrsCampus Cluster File SystemA failure of both disk array controllers serving the CC file systems resulted in abrupt loss of access to the underlying storage. One array controller was identified as broken while the storage system was brought back up on the remaining controller for inspection and analysis. A thorough check of the file systems and storage devices was started. At 1100hrs May 24th the replacement array controller arrived and was installed. After further testing to assure system stability, the file systems were brought back online and released to the cluster admins.All campus cluster file systemsNormal cluster operations were resumed. Investigation into the root cause is ongoing with the cooperation of the system manufacturer. 
2018-05-21DNS1/2There were a few reports of intermittent DNS lookup failures/slowness.Firewall state-table resources were being exhausted. Limits for those state tables have been increased, which appears to have resolved the problem. No further reports of the issue after making the adjustment.  
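The fix described above — raising firewall state-table limits — might look like the following on a Linux (netfilter) firewall. Both the platform and the value are assumptions; the page names neither.

```ini
# /etc/sysctl.d/99-conntrack.conf — hedged sketch; the actual firewall
# platform and limit value are not stated on this page.
# Raise the maximum number of tracked connections (state-table entries):
net.netfilter.nf_conntrack_max = 524288
```

Applied with `sysctl --system`; the right value depends on available RAM, since each conntrack entry consumes kernel memory.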
2018-05-24 10:55am

2018-05-24 11:08am System is being upgraded and rebootedNo services should be affectedyum upgrade and reboot  
2018-05-17 8:002018-05-17 15:00NPCF-Core-EastThe hardware and firmware on the core east router were upgradedTraffic rerouted through npcf-core-west during the maintenance window. There was an unexpected outage for about 10 mins which impacted network connectivity throughout NCSA.Upgrade on core-east was completed successfully. No further network outages are expected. 
2018-05-09 7:002018-05-09 17:40dns1.ncsa.illinois.eduEnabling BIND on ipv6 and enabling a firewall on the serverNo impact is expected.Maintenance was completed. 

Monthly maintenance (May):

  • GPFS server & client updates, plus nosuid mounting
  • Physical firewall changes in NPCF for new vLANs
  • BIOS firmware updates
  • OS updates
  • Update of puppet-stdlibs module
All systems (except lsst-daq, lsst-l1-*, & tus-ats01) were unavailable for maintenance.

Maintenance was extended until 13:30 and then completed.

External Grafana monitoring was offline until 14:25 due to a storage rebuild on lsst-monitor01.

2018-05-17 10:132018-05-17 10:18Core OutageDuring core router maintenance the incorrect core router was powered off.Network connectivity across NCSA was affected.The core router was powered back on, verified and brought back into service. 
2018-05-16 08:00


2018-05-16 17:40Campus Cluster

Monthly maintenance (May)

  • GPFS upgrade to
  • FW upgrade on Juniper switches
  • OS updates
  • Add 4 more 40G cables for ccioe nodes for redundancy
Entire system was unavailable for maintenance.Maintenance complete, all tasks complete. 
2018-05-16  1100hrs2018-05-16 1300hrsADSPlanned Campus Cluster network upgrades also impacted access to ADSAll ADS storage exports became unreachableEric has notified us that the networking maintenance is complete and ADS customers are able to access their storage again. 
21 Mar 201814 May 2018openxdmod.ncsa.illinois.eduAn update to Torque broke the updates of XDMoD. The service was offline while the system it resided on was updated, all the dependency software was installed, and the latest version of XDMoD was installed. Then all the data had to be re-imported.Software updatingService restored with updated software. 
2018-05-08 00002018-05-09 0015NCSA Storage Condo
One node ran out of memory, causing a deadlock in GPFS. During deadlock recovery, GPFS shut down on multiple nodes. Upon restart of the cluster, a different metadata server had a check on its PCI bus, forcing another unmount. All file systems but one were recovered. While recovering the last one, one of the Roger NetApp storage arrays started throwing errors, requiring a power cycle of the controller and disks, prompting a final recovery of the last file system.
Condo file systems and services.All file systems recovered and services restored. 
2018-05-08 07:002018-05-08 07:40iForgeQuarterly Maintenance ( 20180508 Maintenance for iForge )All systems were unavailable during the maintenance.Planned maintenance completed successfully 
2018-05-08 8:002018-05-09 8:00NPCF-Core-WestThe hardware and firmware on the core router will be upgradedTraffic will be rerouted through npcf-core-east during the maintenance window. No impact is expected.The hardware and firmware was upgraded on npcf-core-west without incident. Traffic has been successfully failed back. 
2018-05-03 08:452018-05-03 10:15NCSA WIKI, JIRA, services that rely on NCSA LDAPA large number of connections from two particular servers was hitting LDAP, causing a slow-down that in turn caused timeouts for various applications using LDAP authentication. Blocking the culprit servers remedied the situation.NCSA WIKI, NCSA JIRA, other applications that rely on NCSA LDAP authentication.Culprit servers were blocked 
9:00am9:25amsyslog-sec.ncsa.illinois.eduOut-of-cycle patching of Security Syslog collectors to address CVE-2018-1000140Load balancer failed over to the secondary collector; RELP will be buffered.

relay-01 was updated and loadbalancer failed back.


4/25 14:004/25 15:00MREN WAN CircuitWAN circuit testing.Traffic will be re-routed over an alternate peering during the test period.The MREN circuit was brought back in to production. 
2018-04-24 12:302018-04-24 16:00NCSA jabber serviceJabber was down while we repaired its authorization backend, which wasn't accepting jabber logins.Jabber is working again. 
2018-04-24 09:102018-04-24 09:50LSSTIncreased LDAP timeout to 60 seconds in sssd.conf to fix problems with long login times and failure to start batch jobskub*, verify-worker*

sssd.conf updated, sssd restarted

verify-worker nodes were drained during the change

Affected nodes may have slow LDAP response times for a short while (due to the local cache needing to be rebuilt)
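A change like the one in the entry above might look as follows in `/etc/sssd/sssd.conf`. The domain name and the specific timeout options are assumptions; the page only says "LDAP timeout" was raised to 60 seconds without naming the option.

```ini
# Hedged sketch of /etc/sssd/sssd.conf — the domain name and option choice
# are assumptions; the page does not name the exact sssd setting changed.
[domain/ncsa]
ldap_search_timeout = 60   # seconds allowed for an LDAP search to complete
ldap_opt_timeout = 60      # seconds allowed for other synchronous LDAP calls
```

After editing, sssd is restarted (e.g. `systemctl restart sssd`) so the new timeouts take effect; restarting also invalidates the local cache, which is consistent with the brief slow responses noted in the entry.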

04/18/2018 10:3004/18/2018 11:30ICCP April MaintenanceReplaced 4x10G links from cc-core0 to carne. Updated BIOS on the remaining cluster nodes.No outage.Completed without any outage. 
04/18/2018 10:3004/18/2018 11:30ICCP core switchesOne of the 4x10G links from cc-core0 to carne had incrementing errors and has been administratively down to prevent those errors from affecting traffic. There was a scratched fiber that earlier diagnosis had revealed, so we replaced the fiber during this ICCP PM.Nothing, all traffic rerouted through cc-core1The errors are still incrementing, but we've narrowed down the remaining options for what might be going on. 
4/12 09304/12 1830ADS NFS/SambaThe ESXi Hypervisor server had an error on it: 'A PCI error requiring a reboot has occurred.'.ADS NFS/Samba/GridftpThe server was rebooted, the error cleared and all systems/services were restarted. 
4/11 03:00 p.m.4/11 03:15 p.m.NetactNetact code was updated. Going forward, new office activation names will have "-ofc" appended to them.No service impact to Netact.Change was successfully implemented. Netact remained in service during and after the change. 
4/11 9:004/11 10:00LSST NPCF FirewallPrimary firewall will be upgraded to use FRR instead of openBGP.No impact is expected.  The firewalls do not need to be failed over and no interruption in traffic flow is anticipated.Firewall was successfully migrated.  No downtime occurred. 





dns1.ncsa.illinois.eduOS Patching and BIND updatesdns1 (primary DNS server) will be rebooted to apply patches. DNS2 will remain up.DNS1 OS patching is completed. BIND was upgraded to 9.11. BIND is currently bound only to its IPv4 interface. 





dns2.ncsa.illinois.eduOS Patching and BIND updatesdns2 (secondary DNS server) will be rebooted to apply patches. DNS1 will remain up. An IPv6 address will also be added to the system in preparation for a broader IPv6 DNS rollout.DNS2 OS was patched. BIND was upgraded to 9.11. An IPv6 address was also enabled on the server and BIND is listening on that address. 





MREN WAN CircuitPort MoveTraffic will be re-routed over an alternate peering during the maintenance. The port was moved and the circuit was brought back into service without issue. 
LDAPLDAP process crashedAuthentication to LDAP-backed servicesLDAP was upgraded and restarted 





MREN WAN CircuitPort MoveTraffic will be re-routed over an alternate peering during the maintenance.The port was moved and the circuit was brought back into service without issue. 



MREN WAN CircuitWAN circuit testing.Traffic will be re-routed over an alternate peering during the test period.Testing was completed and the circuit was brought back into service. 
2018-03-21 08:002018-03-21 17:30Campus Cluster management server and compute nodes except DES and MWT2Deploying new management server, upgrading to Torque 6.1.2 and Moab 9.1.2. BIOS updates. Configuration changes on GPFS servers. Tech Services CARNE code upgrade.Scheduler down. User access disabled

The new management server is up with CentOS 7. Installed Torque 6.1.2 and Moab 9.1.2. BIOS updates are done on most nodes. Configuration changes on GPFS done. Tech Services CARNE code upgrade done.

2018-03-16 1:00pm2018-03-16 5:45pmISDA + NCSA OpenSource

Security patches of VM servers as well as backend filesystem

Updates of Bamboo, JIRA, Confluence, BitBucket and CROWD 

All systems will be unavailable for a brief period of time.

During updates of OpenSource services part of OpenSource will be offline for up to an hour.

Updated fileserver (brief struggle with zfs and kernel updates). Updated proxmox servers. Updated JIRA, Confluence, BitBucket and CROWD. Bamboo will be done later this weekend. 

Nebula Openstack cluster

Security and filesystem patchesAll instances and Nebula services were unavailableFilesystem updates and security patches were applied. The filesystem is more responsive, but ~20 instances are recovering from problems that occurred before the outage. 
2018-03-15 16:20LSST

Lingering issues on select nodes following March PM

  • lsst-qserv-master01 - cannot mount local /qserv volume
  • lsst-xfer - issue w/ sshd
  • lsst-dts - issue w/ sshd
  • lsst-l1-cl-dmcs - unknown issue
  • lsst7 - issue w/ sshd

Following resolved by 13:23:

  • lsst-qserv-master01
  • lsst-xfer
  • lsst-dts
  • lsst-l1-cl-dmcs

Resolved by 16:20:

  • lsst7

March maintenance:

  • GPFS server updates and configuration of additional NFS/Samba services
  • Urgent Firmware updates
  • Increase size of /tmp on lsst-dev01
  • Hardware maintenance/memory increases on select servers/VMs
  • Release of refactored Puppet code for NCSA 3003 servers
  • OS updates
  • Recabling servers in NCSA 3003 to new switches
All systems were unavailable for maintenance.Completed and most systems back online. Lingering issues for lsst-qserv-master01, lsst7, lsst-xfer, lsst-dts, and lsst-l1-cl-dmcs are being tracked in a separate status event. 





Remote Access VPNAn issue with authentication for the VPN has occurred.Any new connections will not be established. Existing connections are unaffected.Authentication services were restored. 
2018-03-09 10:08am2018-03-09 11:00amCampus ClusterAccording to IBM, cc-mgmt1 was the culprit in halting communication across the cluster during the GPFS snapshot process.Users couldn't log in or access the filesystem.Rebooted cc-mgmt1 and restarted services (RM & Scheduler). 
2018-03-09 06:052018-03-09 08:00public-linux,, & events.ncsa.illinois.eduA routine kernel upgrade resulted in failure of the OpenAFS client on these servers.OpenAFS storage was unavailable on these servers, resulting in the website failures.Resolved. Packages were updated and OpenAFS reinstalled. 
2018-03-07 15:002018-03-07 16:10LSSTqserv-db12 had one failed drive in the OS mirror replaced but the other was presenting errors as well so the RAID could not rebuild. The Qserv system would have been unavailable during this maintenance.qserv-db12The node was taken down for replacement of the 2nd disk, to rebuild the RAID in the OS volume, and to reinstall the OS. 





ESnet PeeringThe connection servicing our direct peering with ESnet will be moved during this window.Connections will be rerouted over a redundant peering. No service impact is expected.The connection was successfully migrated and the peering with ESnet was brought back into service without issue. 





WAN Connectivity DegradedThe router servicing several of our WAN connections is currently in a degraded state.Traffic has been gracefully rerouted. No user facing connectivity issues have been reported.Graceful failover to the backup routing engine cleared a fault condition and affected peerings were re-established. 
2018-02-27 07:152018-02-27 09:10Campus Cluster schedulerScheduler became unresponsiveJob submission & starting new jobsRebooted the node, restarted RM & Scheduler. 
2018-02-26 06:002018-02-27 01:35All Blue Waters ServicesSecurity Patch CLE, SU26 Lustre patchAll Blue Waters resources are unavailableBlue Waters returned to service at 1:35AM 27th Feb, with HPSS returned earlier at 10PM 26th Feb. 
2018-02-23 16:302018-02-23 16:30Kerberos Admin serviceKDC configuration was modified to allow creation of service principals that can create and modify host and service principals.kadmin service was unavailable for 1 second while the new config was read.

We can now delegate to groups or users the ability to create and manage host keys and service principals.

2018-02-23 08:002018-02-23 09:00LSST Puppet ChangesRolled out a significant reorganization of the Puppet resources in the NCSA 3003 data center in order to standardize between LSST Puppet environments at NCSA. We had done extensive testing and did not expect any outages or disruption of services.

No interruption of services.

Changes being applied to: lsst-dev01, lsst-dev-db, lsst-web, lsst-xfer, lsst-dts, lsst-demo, L1 test stand, DBB test stand, elastic test stand.

Updated successfully with no interruption of availability or services. 
2018-02-21 13:302018-02-22 00:39ESnet 100G Peering DownThere is a suspected fiber cut between Urbana and Peoria on ICCN optical equipment. Our 100G direct WAN path to ESnet rides over this optical path and is thus currently down. The fiber vendor has identified the source of the problem (high water caused the fiber to be pulled out of a splice case)Nothing. All traffic destined for ESnet or resources that would normally take the ESnet WAN path will reroute through our other WAN pathsRepaired. 
2018-02-21 08:002018-02-21 20:00Campus Cluster

Campus Cluster February Maintenance

  1. Applying security patches & OFED upgrade
  2. Testing/tuning metadata performance
  3. Troubleshoot/upgraded code on cc-core switches
 All systems were unavailable

Partially completed; the following items are rescheduled for the next maintenance.

  1. Deploying new scheduler (due to system stability concerns)
  2. Upgrading Torque 6.1.2 and Moab 9.1.2 (not enough time for testing after release)
  3. Maintenance on CARNE router (bug in the code)
2018-02-05 10:452018-02-21 13:30ICCP Networking - Outbound

A hardware failure on one of the two core switches for ICCP caused that switch to enter a degraded service mode and eventually fail completely. This was also combined with software bugs that caused looping of packets between the two cores in the MC-LAG. The other core was still functioning properly and was providing connectivity for all ICCP/ADS/DES systems normally for the duration of the degraded service time period. A hardware replacement RMA was initiated. The hardware came in, but the hardware alone did not fix the issue. We then waited until an ICCP PM where we could test things without interruption of service, and we upgraded the code and put in some bug-mitigation configuration changes. These things combined solved the issues.

Nothing as far as production. During the period when cc-core0 was down, aggregate outbound bandwidth was 40Gbps instead of the normal 80Gbps.As of now the cores are both in production and stable. 
2018-02-16 12:002018-02-16 12:30IPSEC VPNThe appliance servicing various IPSEC VPN connections was patched.NothingPatch was successful utilizing the failover capability of the VPN cluster to mitigate any service interruptions 
2018-02-15 08:002018-02-15 13:00LSST

February maintenance:

  • Updating GPFS mounts to access new storage appliance
  • Rewire 2 PDUs at NCSA 3003
  • Switch stack configuration changes at NCSA 3003
  • Routine system updates
  • Firewall maintenance NPCF
  • Updates to system monitoring
 All systems were unavailable.Completed and all systems back online. 
2018-02-13 08:002018-02-13 09:00Certificate System Firewall 2Upgrade software to current production version. No interruptions to service expected.CA servicesFW upgraded - services were interrupted due to a failed routing service. 
2018-02-13 06:002018-02-13 06:30AnyConnect VPNPatches are being applied to the AnyConnect VPN applianceAccess to the NCSA AnyConnect VPN will be unavailable.The VPN has been patched and client connections have been re-established. 
2018-02-10 02:002018-02-10 10:35Campus ClusterA GPFS snapshot hung and locked the filesystemAll systems were inaccessible. Running jobs were lost.Gathered information for IBM, bounced the filesystem, and rebooted the cluster 
2018-02-06 07:002018-02-06 17:35iForgeQuarterly Maintenance ( 20180206 Maintenance for iForge )All Systems were unavailable during the maintenance.Planned maintenance completed successfully 
2018-02-06 08:002018-02-06 09:00Certificate System Firewall 1Upgrade software to current production version. It is expected that current connections will be interrupted and a retry will be required.
  • NCSA TFCA Myproxy
  • XSEDE Myproxy
  • Completed
2018-02-01 16:302018-02-01 16:45sslvpn.ncsa.illinois.eduWe are rebooting our VPN appliances to mitigate a critical security vulnerability that allows for remote code execution exploits. That vulnerability is described here: Industry partners' site-to-site VPNsVPN rebooted without incident. Service was restored at 4:34PM. 
2018-02-01 16:302018-02-01 16:45vpn.ncsa.illinois.eduWe are rebooting our VPN appliances to mitigate a critical security vulnerability that allows for remote code execution exploits. That vulnerability is described here: Industry partners' site-to-site VPNs and the NCSA remote access VPN service will be down during the maintenance. Any users connected to the NCSA VPN at the time of the maintenance will lose connectivity.VPN rebooted without incident. Service was restored at 4:34PM. 
2018-01-29 10:052018-01-29 10:10LSST verify worker nodes and lsst-devA network flap on the LSST network caused GPFS ejection of some nodes. Network and security teams are investigating.A few of the LSST nodes for 2-5 minutes and 2 jobsQualys scan time frame changed and investigation continues. 
2018-01-29 12:272018-01-29 12:31NCSA Jabber serviceJabber service was restarted to install a new SSL certificate.NCSA Jabber was down momentarilyNCSA Jabber restarted with new SSL certificate 
2018-01-26 13:002018-01-26 13:15LSST NFS service slowdownA cron job for lenovo system cleanup ran and caused the lenovo box to slow down services. The NFS service was starved.lsst-dev NFS showed stale mountsThe cron job was deleted and re-written. 
Wed 1/24/2018 13:35Wed 1/24/2018 14:55LSST NFS serviceWe were notified by the NCSA security team that there was a stale NFS mount on one of the LSST test nodes; NFS services stopped working.All NFS mounts for LSST systems such as lsst-demo and lsst-SUI were not workingNFS server was rebooted. 
Tue 1/23/2018 23:00Wed 1/24/2018 01:25Condo storage servicesHit a known bug in GPFS for quota management.All Condo services from 11pm to 1:25 amWe need to upgrade to a newer level of GPFS, but for now we have lowered the frequency of the check_fileset_inodes script 
2018-01-22 07:002018-01-22 13:05Blue Waters Compute NodesBlue Waters compute nodes were bounced to resolve issues caused by previous home file system outage (due to bad OST)Compute nodes were down, scheduler was paused.Compute nodes were bounced successfully and returned to full service. 
2018-01-21 08:422018-01-21 11:30Netsec-vc switch stack - FPC 4Switch member 4 of the netsec switch stack was down. Severe filesystem corruption occurred on the primary partition.Any hosts connected to member 4 of that switch that were not redundantly connected to other switches in the stack.The switch was repaired by doing a full reformat/reinstall of JunOS. Everything is back into production. 
2018-01-20 22:002018-01-21 0300Condo file systemsBringing the Roger disk into the condo, commands executed from the Roger GPFS servers caused the cluster to arbitrate for GPFS servers.All condo file systems mounted on nodes.The SSH configuration was changed on the Roger GPFS servers to include the Condo GPFS server IP's. All file systems were returned to normal with no other problems and no remounts required. 
2018-01-18 17:002018-01-19 15:00ISDA Hypervisors, NCSA Open SourceHypervisor updates.All systems were down for short periods of time as hypervisors rebootedAll patches applied. 
2018-01-18 00:002018-01-18 24:00Campus ClusterCopying all data to a new filesystem. Deploying new storage (14K). Dividing the cluster into two (IB & Ethernet). Upgrading GPFS. Deploying a new management node and new image server (if time permits). Applying security patches to compute nodes (no FW update at this time).All systems unavailable.New storage system was brought online; additional capacity and performance were added. 

2018-01-18 18:402018-01-18 23:00LSSTLSST firewall outage in NPCF. Both pfSense firewalls were accidentally powered off.PDAC (Qserv & SUI) and verification clusters were inaccessible, and GPFS issues were introduced across many services, e.g. lsst-dev01.The pfSense firewall appliances were power cycled and services restored. 
2018-01-18 12:582018-01-18 14:10Code42 Crashplan backup systemCode42 Crashplan servers were upgraded to the latest JDK and Code42 6.5.2.Clients were unable to perform restores or push files into the backup archive from roughly 13:35 - 13:55Code42 servers are now running the latest security updates to the Crashplan service. 
2018-01-18 08:002018-01-18 10:00LSSTMonthly OS updates, network switch updates, firmware updates, etc.All dev systems unavailable. Qserv and SUI nodes remained available.
2018-01-17 10:352018-01-17 13:00RSA Authentication Manager ServersUpgraded to Authentication Manager 8.1sp1p7No systems should have seen any impactLatest security patches are applied. 
2018-01-12 06:002018-01-12 10:00Decommission NCSA Rocket.chatThe old NCSA service was shut down.Any archived conversations or content are no longer available to users.NCSA service was shut down and redirected to NCSA @ Illinois Slack. 
Friday, Jan 12th, 0000-0600 CSTInternet2Engineers from Internet2 will be migrating our BGP peering with I2's Commercial Peering Service (CPS) to a new location. Small disruptions may occur with the maintenance for the CPS service, but no user traffic disruptions should occur.None; alternative routes are present.Maintenance was completed successfully. 
2018-01-11 08:002018-01-11 13:30LSSTCritical patches on lsst-dev systems (incl. kernel updates).All systems unavailable.


Thursday, Jan 11th, 0000 CSTThursday, Jan 11th, 0400 CSTConnectivity to Internet2 and backup LHCONE peerings (ICCP and MWT2, respectively)Engineers from Internet2 performed maintenance that affected certain BGP peerings on the device that is ICCP/MWT2's upstream router, CARNE. Specifically, both the 100G Internet2 peering and the Internet2 LHCONE peering on CARNE were disrupted during this timeframe. MWT2 currently gets to LHCONE through CARNE's ESnet peering, which was fully functional, and was able to get to UChicago through CARNE's OmniPoP 100G peering. As for ICCP, traffic to/from Internet2-based routes rerouted through the ICCN. Neither ICCP nor MWT2 reported this maintenance as service impacting.Maintenance was completed successfully. 
2018-01-08 10:472018-01-08 11:30NebulaStorage nodes lost networkingAll nebula instancesStorage nodes were brought back online, instances were rebooted 
2018-01-02 09:002018-01-05 17:00NebulaNebula was shut down for hardware and software maintenance from January 2nd, 2018 at 9am until January 5th, 2018 at 5pm. Spectre and Meltdown patches were applied, as well as all firmware updates, OS/distribution updates, and a filesystem upgrade.All systems were unavailable.Faster system that is now homogeneous, so OpenStack upgrades are now possible. 
2018-01-04 17:002018-01-05 20:00Blue WatersOne OST hosting the home file system had three drives fail simultaneously.Portions of the home file system (with data on the affected OST) were not accessible.Repair work was carried out on the failed OST. The scheduler continued to operate but allowed only jobs not affected by the failed OST to start. Full operation resumed after successful recovery of the failed OST.
2017-12-20 08:002017-12-20 10:00LSST(1) Firewall maintenance (08:00-09:00) and (2) migration of NFS services (08:00-10:00).

Firewall maintenance: There should be no noticeable effect but scope of service includes most systems at NPCF (including PDAC, SUI, and Slurm/batch/verify nodes).

Migration of NFS services: SUI and lsst-demo* nodes.

Maintenance completed without issues. 
2017-12-14 06:002017-12-14 20:30LSSTMonthly OS updates, network switch updates, firmware updates, etc.All systems unavailable.All systems back online. We ran into issues with the policy-based routing on the LSST aggregate switches in NPCF that caused the outage to be extended longer than planned.
2017-12-13 09:002017-12-13 11:00JIRA UpgradeUpgraded JIRA from version 7.0 to 7.6NCSA JiraSuccessfully upgraded 
2017-12-13 06:302017-12-13 07:39NCSA JabberAttempted to upgrade Openfire XMPP jabber software.NCSA Jabber was unavailable during the upgrade.The upgrade failed. Jabber is available, but still running the old version. The upgrade will be rescheduled. 
2017-12-11 10:002017-12-11 16:00Unused AFS fileservers were upgraded to 1.6.22After moving all volumes to servers updated on 2017-12-07, the now-unused AFS servers were upgraded to OpenAFS 1.6.22.No impact to other systems as they were unused at the time they were upgraded.All of NCSA's AFS cell is running OpenAFS 1.6.22 
2017-12-09 03:002017-12-09 07:42BlueWaters PortalThe BlueWaters portal software crashed. Automated monitoring processes did not restart it correctly.The BlueWaters portal website was unavailable.The BlueWaters portal service was manually restarted and the website is available. 
2017-12-09 1000hrs2017-12-09 1400hrsGlobus Online ( Please be advised that the Globus service will be unavailable on Saturday, December 9, 2017, between 10:00am and 2:00pm CST while we conduct scheduled upgrades. Active file transfers will be suspended during this time and they will resume when the Globus service is restored. Users trying to access the service at   (or on your institution's branded Globus website) will see a maintenance page until the service is restored.
All NCSA Globus endpoints.  
2017-12-072017-12-07Unused AFS file servers were upgraded to 1.6.22Three unused AFS fileservers were upgraded to the latest 1.6.22 release of OpenAFSNo impact to other systems as they were unused.These AFS fileservers can no longer be crashed by malicious clients. 
2017-12-072017-12-07AFS database servers were upgraded to 1.6.22The three database servers were upgraded to the latest 1.6.22 release of OpenAFSNo modern clients noticed the staggered updates.These servers can no longer be crashed by malicious clients. 
2017-12-05 16:002017-12-05 16:20dhcp.ncsa.illinois.eduNCSA Neteng will be migrating the DHCP server VM to Security team's VMware infrastructure.

- Hosts on the NCSAnet wireless network might be impacted.
- Any activated hosts that might be on the roaming range might be impacted.
+ Illinoisnet and Illinois_Guest wireless will be available at ALL times.
+ Wired network connection will be available throughout the maintenance window.

Maintenance was completed successfully and services are running as expected. 
2017-12-02 09:302017-12-02 11:45NCSA opensourceUpgrade of Bamboo, JIRA, Confluence, BitBucket, FishEye, and CrowdSubservices of opensource could be down for a short time.All services upgraded and running as normal. 
2017-11-20 18:212017-11-29 14:30ROGER OpenStack clusterI/O issues highlighted that GPFS CES NFS servers probably shouldn't run 400+ days without a reboot.ROGER's OpenStack and the various services hosted therein, including JupyterHub ServerA reboot of all nodes, including the CES servers and all hypervisors, cleared most of the problems (one node required an fsck and a second reboot, and another node/hypervisor is still unavailable). I/O contention was felt as many instances simultaneously attempted to start/restart. Instances housed on the unavailable node are being migrated to another hypervisor. 
2017-11-21 9:002017-11-22 14:00Open Source, ISDA serversUpdate the fileserver that hosts VMs and all the XEN servers.NCSA Open Source unavailable; most ISDA servers unavailableNetwork issues delayed updates. All hosts updated and everything back to normal.
2017-11-21 16:002017-11-21 16:40Code42 CrashplanThe Code42 crashplan infrastructure was upgraded to version 6.5.1 to apply security and performance improvementsClients transparently reconnected to servers after they restartedNow running on Code42 version 6.5.1 
2017-11-20 9:002017-11-20 16:38Nebula Openstack clusterNebula OpenStack cluster was unavailable for emergency hardware maintenance. A failing RAID controller from one of the storage nodes and a network switch were replaced.Not all instances were impacted. Running Nebula instances that were affected by the outage were shut down, then restarted again after we finished maintenance.

Nebula is available.
No additional maintenance is needed for Tuesday, November 21.

2017-11-16 16:462017-11-20 12:40NCSA JIRAJIRA wasn't importing some email requests properly after the NCSA MySQL restart.Some email sent to JIRA via help+ addresses wasn't being imported.JIRA is now accepting email and all email sent while it was broken has now been imported as expected. 

BW LDAP Master (Blue Waters)Scheduled maintenanceUpdated LDAP Lustre quotas to bytes and added archive quotas. IDDS will track and drive quota changes with acctd.Production continued without interruption. The BW LDAP master was isolated, Lustre quotas were changed to bytes with the addition of archive quotas, and replicas pulled updates without error. 
2017-11-16 14:302017-11-16 16:52Internal website (MIS Savanah)A database table used by MIS tools became corrupted.The website would become unresponsive every time the corrupted database table was accessed.OS kernel and packages were updated during debugging. The MIS database table was restored and the website came back online. 
2017-11-16 16:462017-11-16 16:48NCSA MySQLThe NCSA MySQL server had to be restarted in order to delete the corrupted table used by MIS.All services that use MySQL were down during the outage. This includes: Confluence, JIRA, RT, and lots of websitesMySQL was restarted successfully. 
2017-11-16 08002017-11-16 1200LSSTMonthly OS updates, plus first round of Puppet technical debt changes (upgrading to best design & coding practices)

All systems unavailable from 0800 - 1000 hrs.

GPFS unavailable from 0800 - 1000 hrs.

PDAC systems unavailable from 0800 - 1200 hrs.

Completed. OS kernel and package updates. Slurm upgrade to 17.02.

2017-11-15 13:302017-11-15 15:10RSA Authentication ManagerRSA Authentication Manager was patched to fix cross-site scripting vulnerabilities and other issuesNothing was affected by the updateRSA Authentication Manager is running 8.2 SP1 P6. Process worked as expected. 
2017-11-15 - 13:302017-11-15 - 14:30BW 10.5 Firewall Upgrade Part 2The normally active "A" unit of the NCSA BW 10.5 firewall will be upgraded and then normal fail-over status will be re-enabled.The possibility of connection resets when the A unit comes back from being upgraded and state is being synced.Completed; process worked as expected. 
2017-11-14 11:272017-11-14 11:33LDAPLDAP was unresponsive to requests.Several services hung while authentication was unavailable.LDAP services were killed and restarted. 
ROGER Hadoop/Ambaricg-hm12 and cg-hm13 suffered minor disk failures which crashed the nodesAmbari was effectively offlineRebooted the nodes; fsck ran as part of the startup sequence and the nodes booted properly 
ROGER Hadoop/AmbariHard drive failures on cg-hm10 and cg-hm17Certain Ambari services and HDFScg-hm17 returned to service after a power cycle and reboot; cg-hm10's hard drive didn't respond to a reboot 
2017-11-11 16:582017-11-11 19:09Blue WatersA water leak from XDP4-8 caused high temperatures on c12-7 and c14-7, resulting in an EPO on both cabinets.Scheduler was paused to place system reservations on compute nodes in affected cabinets, then resumed. 
2017-11-10 14:002017-11-10 14:45NCSA Open SourceUpgrade of the following software: Bamboo, JIRA, Confluence, and BitBucketUpdates will happen in place and will result in minimal downtime of components.completed, minimal interruption of service 
2017-11-10 - 08:002017-11-10 - 08:30CA Firewall Upgrade - B unitthe stand-by, "B" unit, NCSA Certificate Service Firewall will be upgraded to same version as A unit.Expect no impact to services completed, no interruption of service 




17:30Netdot was migrated to Security's VMware infrastructure.During the downtime users weren't able to activate or deactivate their network connections via Netact.Migrated successfully. Netdot is up and running. 
2017-11-08 06:002017-11-08 15:00ITS vSphere vCenterITS vSphere was upgraded to the latest version of VMware vCenter. New access restrictions were also be put into place.All VMs remained online during the maintenance, but management through vCenter was offline during the upgrade. Upgrade completed successfully. 
2017-11-08 09:302017-11-08 10:00BW 10.5 Firewall Upgrade Part 1the stand-by, "B" unit, NCSA BW 10.5 Firewall will be upgraded and then traffic redirected through it for load testing before the "A" unit is upgradedExpect no impact to servicesUpgrade completed successfully. Some states were reset when traffic switched to the B unit. 
2017-11-07 7:002017-11-07 18:37iForge

quarterly maintenance

Update OS image.
Update GPFS to version 4.2.3-5
Redistribute power drops.
Update TORQUE.
BIOS updates.

iForge (and associated clusters)

All production systems are back in service

2017-11-07 - 13:302017-11-07 - 15:00CA Firewall Upgrade Part 2The normally active "A" unit of the NCSA Certificate Service firewall will be upgraded and then normal fail-over status will be re-enabled.The possibility of connection resets when the A unit comes back from being upgraded and state is being synced.Completed upgrade 
2017-11-06 15:282017-11-06 15:53Blue WatersEPO happened to c12-7 and c14-7.HSN quiesced.Scheduler was paused to place system reservations on compute nodes in affected cabinets, then resumed. 
2017-11-03 16:212017-11-03 16:32LDAPLDAP was unresponsive to requests.Several services hung while authentication was unavailable.LDAP services were killed and restarted. 
2017-11-02 09:002017-11-02 16:00LSSTLSST had a GPFS server that was down and had failed over to the other server for NFS.The GPFS clients failed over automatically, and we manually failed over NFS in the morning.NFS exports were moved to an independent server. IBM was at NCSA and is continuing to debug the problems. 
2017-10-31 17:112017-11-01 11:13LSSTGPFS degraded/outage

most NCSA-hosted LSST resources experienced degraded GPFS performance

hosts with native mounts (PDAC) experienced an outage

A deadlock at 17:11 yesterday temporarily caused slow performance. Then one GPFS server went offline at 18:21 and services failed over. NFS mounts (qserv/sui) were reported as hanging by a user at 09:12 today but may have been degraded over night. Affected nodes were rebooted and NFS mounts recovered by 11:13. IBM is onsite diagnosing issues with the GPFS system and ordering repairs (including a network card on one server). 
2017-10-31 15:302017-10-31 16:00LSSTGPFS outage

most NCSA-hosted LSST resources

native mounts (e.g., lsst-dev01, verify-worker*) and NFS mounts (e.g., PDAC)

All disks in the GPFS storage system went offline temporarily and came back online by themselves. NFS services were restarted. Client nodes all recovered their mounts on their own. Logs have been sent to the vendor for analysis. 
2017-10-31 - 13:302017-10-31 - 14:30CA Firewall Upgrade Part 1the stand-by, "B" unit, NCSA Certificate Service Firewall will be upgraded and then traffic redirected through it for load testing before the "A" unit is upgradedExpect no impact to servicesUpgrade completed successfully. Some states were reset when traffic switched to the B unit. 
2017-10-30 18:362017-10-31 00:46LSSTGPFS outage

most NCSA-hosted LSST resources

native mounts (e.g., lsst-dev01, verify-worker*) and NFS mounts (e.g., PDAC)

GPFS servers were rebooted. lsst-dev01 and most of the qserv-db nodes were also rebooted. Native GPFS and NFS mounts were recovered. May have been (unintentionally) caused by user processes; we will continue to investigate. 
2017-10-25 22:002017-10-26 11:20LSSTfull/partial GPFS outage

full outage for GPFS during 22:00 hour on 2017-10-25

outage for NFS sharing of GPFS (for qserv, sui) continued through the night

full outage for GPFS recurred 2017-10-26 around 08:44

All GPFS services and mounts have been restored. 
2017-10-26 09:042017-10-26 09:04Various buildings across campus, including NPCF and NCSAIssue with an Ameren line from Mahomet caused a bump/drop/surge in power that lasted 2msLSST had approximately 20 servers at both NPCF and NCSA buildings rebootWas a momentary issue with minimal effect to most systems 
2017-10-26 00:002017-10-26 08:00ICCPgpfs_scratch01 was filled by a very active userAdditional space in scratch wasn't availableAn out-of-cadence purge was run to free 2TB; the user's jobs were held in the scheduler and the user was contacted 
2017-10-25 06:002017-10-25 14:05Blue WatersSecurity patching of the CVE-2017-1000253 security vulnerability.Restricted access to logins, scheduler and compute nodes. HPSS and IE nodes were not affected.System was patched. Login hosts were made available at 9am. The full system was returned to service at 14:05. 
2017-10-24 09:502017-10-24 20:10LSSTNetwork outage / GPFS outage

All LSST nodes from NCSA 3003 (e.g., lsst-dev01/lsst-dev7) and NPCF (verify-worker, PDAC) that connect to GPFS (as GPFS or NFS) lost their connections.

All LSST nodes at NPCF lost network during network stack troubleshooting and replacement of 3rd bad switch.

A 3rd bad switch was discovered and replaced. All nodes have network and GPFS connectivity once again. 
2017-10-23 08:002017-10-24 05:00Campus ClusterCampus Cluster October maintenance.Total outage of the cluster.Replaced core ethernet switches in the shared services pod, ran new ethernet cables for the shared services pod, moved the DES rack from the shared services pod to the ethernet-only pod, and deployed a new patched image. 
2017-10-21 17:152017-10-23 17:45LSSTFirst one, then two public/protected network switches went down in racks N76 and O76 at NPCFMostly qserv-db[11-20] and verify-worker[25-48]; there was also a shorter outage for qserv-master01, qserv-dax01, qserv-db[01-10], all of SUI, and the rest of the verify-worker nodes.Two temporary replacement switches were swapped in. Maintenance and/or longer-term replacements for the original switches are being procured. 
2017-10-18 13:002017-10-18 14:00NetworkingReplaced a linecard in one of our core switches due to hardware failure.Any downstream switches were routed through the other core switch.All work was completed successfully. 
2017-10-19 08:002017-10-19 21:30LSSTOutage and migration of qserv-master01: provisioning of new hardware, copying of data from old server to new.qserv-master01 (and any services that depend on qserv-master01, which may include services provided by qserv-db*, qserv-dax01, and sui*)

UPDATE (2017-10-19 15:15) OS install took much longer than anticipated, completed at 15:00. Data sync is started. Extending outage till 22:00.


2017-10-19 08:002017-10-19 12:00LSSTRoutine patching and reboots, pfSense firmware updates (NPCF), Dell server firmware updates (NPCF).All NCSA-hosted resources except for Nebula.Maintenance completed successfully. (qserv-master migration is ongoing; see separate status entry) 
2017-10-18 14:452017-10-18 15:35Campus ClusterRestart of resource manager failed after removing all block array jobs.Job submissionOpened case with Adaptive (#25796). Found more array jobs and bad jobs in jobs directories. Removed all of those. 
2017-10-15 08:152017-10-15 08:30Open SourceEmergency upgrade of Atlassian Bamboo.Bamboo will be down for a few minutes during this outage window.Bamboo upgraded to the latest version. 
2017-10-14 22:152017-10-14 23:35Campus ClusterScheduler crashJob submissionOpened case with Adaptive, run diag and uploaded the output along with the core file. Restarted the moab. 
2017-10-14 13:002017-10-14 15:23Campus ClusterResource manager crashJob submissionApplied a patch from Adaptive, which helped with faster recovery. Suspended/blocked all current and new array jobs until we have a resolution. 
2017-10-06 09:002017-10-11 01:00NebulaGluster and network issues

1) Gluster sync issues continue from 2017-10-05's Nebula incident.
2) At approximately 2017-10-06 16:10, a Nebula networking issue (unrelated to the Gluster issues) occurred resulting in host network drops within the Nebula infrastructure. This internal networking incident resulted in additional gluster and iscsi issues.
Many instances are broken because iSCSI is broken from the Nebula network issues. And any instances that were broken because of gluster are still broken.

All instances have been restarted and are in a runnable state. Some mounted file systems might require an fsck to verify. If there are other issues please send a ticket.

As the file system continues to heal we may see slower interaction.

2017-10-10 16:302017-10-10 19:10Campus ClusterResource manager crashJob submissionAfter removing problematic jobs from the queue we were able to restart the RM. Opened a case with Adaptive and forwarded the job scripts and core files. 
2017-10-05 14:002017-10-05 17:00NebulaGluster sync issuesOne of the gluster storage servers within Nebula had to be restarted.Approximately 100 VM instances experienced IO issues and were restarted. 
2017-10-06 08:002017-10-06 17:00NCSA direct peering with ESnet

A fiber cut between Peoria and Bloomington caused our ESnet direct peering to go down.

All traffic that would have taken the ESnet peering rerouted through our other WAN peers. As such there were no reported outages of connectivity to resources that users would normally access via this peering.The fiber cut has been repaired and the peering has been re-established. 
2017-10-06 08:002017-10-06 10:00LSSTKernel and package updates to address various security vulnerabilities, including the PIE kernel vulnerability described in CVE-2017-1000253. This will involve an upgrade to CentOS 7.4 and updates to GPFS client software on relevant nodes.All NCSA-hosted LSST resources except for Nebula (incl. LSST-Dev, PDAC, and verification/batch nodes) will be patched and rebooted.Maintenance completed successfully. Pending updates to a couple of management nodes (adm01 and repos01) and one Slurm node that is draining (verify-worker11). 
2017-10-4 07:402017-10-4 09:55Campus ClusterResource Manager crashJob submissionFailure on initial restart attempt. After looking through the core, decided to try a restart again without any change. This time it worked. 
2017-10-03 13:002017-10-03 19:00Campus ClusterResource Manager crashJob submissionAfter removing ~30 problematic jobs from the queue we were able to restart the RM. Opened a case with Adaptive and forwarded the job scripts and core files. 
2017-09-21 02:572017-09-21 09:40Storage server (AFS, iSCSI, web, etc)

The parchment storage server stopped responding on the network.


  • Several websites were down.
  • iSCSI storage mounted to fileserver went offline.
  • Several AFS volumes, including some users' home directories were offline.
Replaced optical transceiver on the machine and networking restarted. Also updated kernel and AFS. 
2017-09-20 08:002017-09-20 13:45Campus ClusterSeptember MaintenanceTotal cluster outageMaintenance completed successfully. 
2017-09-20 08:002017-09-20 11:30NCSA Storage CondoNormal maintenance: firmware upgrade on the NetApps so new disk trays could be attached for DSILTotal file system outageThe quarterly maintenance was completed 
2017-09-18 11:202017-09-18 13:30Active Data StorageRAID Failure in NSD server and disk failure on secondary NSD server.ADS service was unavailableRecovered RAID configuration on NSD server and replaced failed disk on secondary NSD. ADS restored. 
2017-09-15 06:202017-09-15 09:28public-linuxOpenAFS storage was not running or mounted after rebooting to a new kernel.AFS storage was not available from this serverReinstalled the dkms-openafs package and restarted the openafs-client. AFS is now working as expected. 
2017-09-10 09:452017-09-10 11:30NCSA Open SourceUpgrade of Bamboo, JIRA, Confluence, BitBucket, FishEye, CrowdDuring the upgrade the services will be unavailable for a short amount of time.All services upgraded successfully. 
2017-08-31 11:072017-08-31 11:11NCSA LDAPNCSA LDAP TimeoutsNCSA LDAP was overloaded and timing out. Users were not able to authenticate via NCSA LDAP during that time.NCSA LDAP stopped timing out at 11:11 am and authentication resumed. 
2017-08-28 11:552017-08-28 12:59NCSA GitLabNCSA GitLab server ran out of disk space for the OSThe web interface at wasn't workingWeb interface is now working. Space freed up by clearing CrashPlan caches. 
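When a server's OS disk fills up, as in the GitLab incident above, the usual first step is finding which directories are consuming the space (here it turned out to be CrashPlan caches). A small self-contained sketch of that triage step — this helper is illustrative, not part of the incident response tooling:

```python
import os

def largest_subdirs(root, top=3):
    """Rank the immediate subdirectories of `root` by total file size,
    largest first, so the biggest space consumers are obvious."""
    sizes = {}
    for entry in os.scandir(root):
        if entry.is_dir(follow_symlinks=False):
            total = 0
            # Walk the subtree and sum regular-file sizes.
            for dirpath, _dirnames, filenames in os.walk(entry.path):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    if not os.path.islink(path):
                        total += os.path.getsize(path)
            sizes[entry.name] = total
    return sorted(sizes.items(), key=lambda kv: kv[1], reverse=True)[:top]
```

Running it against `/` (or a suspect cache directory) points directly at candidates that are safe to clear.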
2017-08-24 13:002017-08-24 14:30netact.ncsa.illinois.eduTransient config issues from some system patching caused apache to not be able to start on the netact serverNetwork Activation The issues were fixed and Network Activation is working again 
2017-08-24 08:002017-08-24 15:30LSSTRack upgrades in NCSA 3003Most LSST Developer services offline during upgradeAll LSST systems are back online with new racks and switches 
2017-08-24 08:002017-08-24 09:30LSSTmonthly maintenance for NPCF (includes patching to address CESA-2017:1789 and CESA-2017:1793)adm01, backup01, bastion01, monitor01, object*, qserv*, sui*, verify-worker*, test0*Maintenance was successfully completed. 
2017-08-23 09:212017-08-23 16:50aForge/iForgeGPFS failed during an upgrade of GPFS on the iForge storage nodes. There was an IB hiccup at the time, but causality is unclear.All jobs on iForge were aborted, GPFS clients needed to be upgraded, and all GPFS client nodes were rebootediForge went production shortly before 5:12pm; aForge went "production" at ~1630 
2017-08-22 20:002017-08-22 30:00Patching DHCP servicePatching OS and services on DHCP1.Will need to reboot the DHCP server a few times during this process; during that time DHCP will be unavailable. This is during the evening so no direct issues are expected.Patching has been completed. 
2017-08-16 08:002017-08-16 16:00Campus ClusterAugust MaintenanceScheduler and resource manager downUpgraded Moab 9.1.1 and Torque 6.1.1. 
2017-08-16 08:002017-08-16 09:15NoneReplace line card in core switchAll systems connected to this switch are believed to be multihomed and should not experience an outage.The line card has been successfully replaced. 
2017-08-16 00:302017-08-16 02:30Blue WatersTwo cabinets (c10 & c11) had EPO due to XDP control valve failure.Scheduler was paused to isolate failing parts, resumed at 2:09.Parts replaced and cabinets were returned to service. 
2017-08-08 7:002017-08-09 3:00iforge/cfdforge/aforge

Update OS image to RH 6.9

Update GPFS to version 4.2.3-2

Redistribute power drops

All four clusters were updated.

All items on checklist completed.

2017-08-03 06:452017-08-03 07:35NCSA Jabber upgradeUpgraded Openfire XMMP jabber softwareNCSA Jabber was unavailable during the upgrade.Jabber was upgraded to the latest version of Openfire 
2017-07-28 17:002017-07-31 eveningDES old operational databaseMigration of the operational database to new hardware happening during the weekend. Update: all of the production data has been migrated except for the largest object table; that is loading now, then the user space will be loaded. Should all hopefully be done by this evening.Migration done successfully. Some other maintenance tasks that give DES additional disk space were done too, along with some performance improvements. 
2017-07-27 11:002017-07-28The network activation server VM needed to be restored from backupNetwork Activation serviceThe service has been fully restored 
2017-07-25 02:362017-07-25 18:00Campus Cluster / Scheduler downBlip on mgmt1 caused a GPFS drop and the scheduler to crashScheduler offlineThe scheduler is still taking a long time to initialize, but jobs can start and run as usual. Opened a case with Adaptive. 
2017-07-20 09:002017-07-20 17:00ROGER Ambari and OpenStackUpdates to the OpenStack control node and the Ambari clusterAmbari nodes (cg-hm08 - cg-hm18), OpenStack instances and serversOpenStack was back in service on time. Ambari had issues mounting HDFS and was held out of service; HDFS was remounted on 25 July 
2017-07-20 06:002017-07-20 10:00All NCSA hosted LSST resourcesMonthly OS patches (addressing issues including CESA-2017:1615 and CESA-2017:1680). Roll-out of updated puppet modules. Batch nodes updated firmware.All nodes in NCSA 3003 and NPCF (batch nodes) will reboot.Overall success. Exceptions: verify-worker31 failed a firmware update and is out of commission (LSST-914) and there are connectivity issues for some VMs used by the NCSA DM team (IHS-365). adm01, backup01, and test[09-10] will be patched in the near future. 
2017-07-19 08:002017-07-19 14:44Campus ClusterJuly Maintenance (applied security patch)Cluster wide, except mwt2 nodesApplied new kernel, glibc, bind patches and newest NVIDIA driver. 
2017-06-30 0000Blue WatersEmergency maintenance to apply security patch addressing Stack Guard security vulnerability.Compute, Login, Scheduler are offline.Kernel and glibc library patched on all affected system. 
2017-06-22 08002017-06-22 1200All NCSA hosted LSST resourcesCRITICAL kernel and package updates to address Stack Guard Page security vulnerability.

Systems will be patched and rebooted.

Outage was extended to last past 1000 until 1200. Systems were successfully patched as planned except for qserv-db12 and qserv-db27, which will not boot. We will follow up on those with a ticket. 
2017-06-22 08002017-06-22 0930LSST cluster nodes (verify-worker*, qserv*, sui*, bastion01, test*, backup01)Deploy Unbound (local caching DNS resolver)DNS resolving may have a short (~30 mins) delay.  Successfully deployed and all tests (including reverse DNS and intra-cluster SSH) pass. 
Blue WatersXDP shutting down caused an EPO on cabinets c1-7 and c2-7.Scheduler was paused to isolate the failing components, then resumed.Warmswapped the failing components and returned them to service.  





NCSA Open SourceSecurity upgrade needed for Bamboo; will also update the following components: Bamboo, JIRA, Confluence, BitBucket, FishEyeMost of the subcomponents of NCSA opensource will be down for a short time when the software is updated.Upgraded Bamboo, JIRA, Confluence, BitBucket, FishEye to latest versions 





ROGER OpenStackThe NFS backend failed and was restarted. The primary CES server for the OpenStack backend failed and tried to fail over to the secondary server, which also failed. SET was notified and had the CES NFS service back up by 1100.The ROGER OpenStack dashboard went down and needed a restart. Several VMs experienced "virtual drive errors" and will need to be restarted.SET is still investigating the cause of the GPFS CES service failover. CyberGIS is working with their users to get the affected VMs restarted 
2017-06-15 08002017-06-15 0930LSST cluster nodes (verify-worker*, qserv*, sui*, bastion01, test*, backup01)Deploy unboundDNS resolving may have a short (~30 mins) delay.

Updates deployed successfully via new puppet module. All tests passed.

EDIT 2017-06-15 1500 - Reverse DNS not working, which broke ssh to qserv* nodes. Disabled unbound.
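The failure mode in this entry — reverse DNS breaking and taking SSH down with it — happens because unbound refuses queries for private-address reverse zones by default, and sshd's hostname lookups then time out. A minimal sketch of a caching-resolver unbound.conf that avoids this, assuming the cluster sits in a private range; the subnet and resolver address below are hypothetical, not the actual LSST configuration:

```
# /etc/unbound/unbound.conf -- caching resolver sketch (addresses hypothetical)
server:
    interface: 127.0.0.1
    access-control: 127.0.0.0/8 allow
    # Re-enable answers for the private-range reverse zone; without this,
    # unbound blocks PTR lookups for RFC 1918 space, which breaks reverse
    # DNS (and with it sshd's hostname checks).
    local-zone: "16.172.in-addr.arpa." nodefault

forward-zone:
    name: "16.172.in-addr.arpa."   # reverse zone for 172.16.0.0/16 (hypothetical)
    forward-addr: 192.0.2.53       # site DNS server (hypothetical)

forward-zone:
    name: "."
    forward-addr: 192.0.2.53
```

The `local-zone ... nodefault` line is the piece whose absence would produce exactly the symptom logged above.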



8:00 a.m.10:00 p.m.Network Core SwitchNetwork Engineering will be replacing a line card in one of our core switches due to a hardware issue.All services should remain active; any affected switch will have a second redundant link to the other core to pass traffic.Line card was successfully replaced. 
2017-06-08 12:00 | 2017-06-11 22:20 | Campus Cluster (scheduler paused) | Disk Enclosure 3 failure on the DDN 10K. | Lost redundancy, forcing us to drain the cluster. | Controller repair/replacement can be time-consuming, so we rebalanced data out of the failed enclosure. Scheduler was resumed as of 22:00.

2017-06-07 12:07 | 2017-06-07 12:42 | NCSA LDAP | The NCSA LDAP service crashed. | NCSA LDAP service was unavailable. | LDAP software and OS were updated and the server rebooted. LDAP is working normally.
2017-05-31 20:06 | 2017-05-31 20:36 | NCSA LDAP | The NCSA LDAP service was timing out. | NCSA LDAP service was unavailable. | The root cause of the LDAP timeouts is still being investigated.
2017-05-22 | 2017-05-26 | Campus Cluster VMs | Network issue on the ESXi (hypervisor) boxes after maintenance. | Could no longer log in to start VMs; the license server, Nagios, and all MWT2 VMs were down. | The issue was fixed on 5/24, and license and Nagios services were restored the same day. MWT2 VMs were moved to Campus Farm. All VMs returned to service as of noon 5/26.

5/12/2017 | 5/18/2017 | Condo (NFS partitions only) | The NFS partition for the condo became extremely unstable after a replication (normal daily maintenance) completed. Many iterations of FSCK, with IBM on the phone, resolved it, followed by 1.5 days restoring files that had been placed in lost+found. | The UofI Library was switched to the read-only version on the ADS during this time. | The root cause is still being investigated.
2017-05-23 14:05 | 2017-05-23 14:13 | NCSA LDAP | The NCSA LDAP service was timing out. | NCSA LDAP service was unavailable. | The issue is still being investigated, but the service has been steadily available since the incident.
2017-05-22 15:41 | 2017-05-22 | Apache Tomcat out of memory. | InCommon/SAML IdP and OIDC authentication services were unavailable. | Service restored by failing over to the secondary server while memory is being increased on the primary server.
05/20/2017 21:09 | 05/20/2017 23:37 | DES nodes on Campus Cluster | Could not communicate outside the switch. | All nodes connected to the switch in POD22 Rack2 at ACB. | Upgrading the code on the switch resolved the issue.
05/20/2017 05:00 | 05/20/2017 21:09 | Campus Cluster and Active Data Storage (ADS) | Total power outage at ACB. | All systems currently reside at ACB. | Power was restored around 13:00. We rotated the ADS rack to align with the Campus Cluster storage rack and changed a couple of VLAN IDs to match campus for the future merger. ESXi boxes are down due to a configuration error after reboot. FSCK of scratch02 reported no major issues.

05/17/2017 02:00 | 05/17/2017 10:45 | Internet2 WAN connectivity | Intermittent WAN connectivity. The outage stemmed from Tech Services' DWDM system, which provides our physical optical path to Chicago via the ICCN. Specifically, the Adva card carrying our 100G wave was seeing strange errors, causing input framing errors for traffic arriving on that interface. | General WAN connectivity to XSEDE sites, certain commodity routes, and other I2 AL2S connections. | The Adva card was rebooted and the input framing errors stopped. Tech Services is working with Adva to find the root cause of the issues on the card.
5/11/2017 | 5/12/2017 | ESnet 100G connection | NCSA and ESnet will move their 100G connection to a different location in Chicago. | We have several diverse high-speed paths to ESnet and DOE; traffic will be redirected to a secondary path.
NCSA Jabber | Upgraded the Openfire XMPP jabber software. | NCSA Jabber was unavailable during the upgrade. | Jabber was upgraded to the latest version of Openfire.





iForge, GPFS, license servers | iForge planned maintenance. | iForge systems, including the ability to submit/run jobs. | PM was completed early, at 1815.
2017-05-06 22:00 | 2017-05-06 23:00 | NCSA Open Source | Upgrades of Atlassian software. | NCSA Open Source BitBucket. | BitBucket was upgraded.
2017-05-06 09:00 | 2017-05-06 10:00 | NCSA Open Source | Upgrade of Atlassian software. | Most services hosted at NCSA Open Source were down for 5 minutes during rolling upgrades. | The following services were upgraded: HipChat, Bamboo, JIRA, Confluence, FishEye, and CROWD.
2017-05-05 17:43 | 2017-05-05 20:02 | ITS vSphere | A VM node panicked. | Several VMs died when the node panicked and were restarted on other VM nodes, including LDAP, JIRA, Help/RT, SMTP, Identity, and others. | All affected VMs were restarted on other VM nodes; most restarted automatically.
2017-04-27 18:10 | 2017-04-27 18:55 | Campus Cluster | Another GPFS interruption. | Both the resource manager and the scheduler went down, along with a handful of compute nodes. | Restarted the RM and scheduler and rebooted all down nodes.
2017-04-27 13:11 | 2017-04-27 14:20 | Nebula | glusterfs crashed due to this bug, so no instances could access their filesystems. | All instances running on Nebula. | Needed to reboot the node systems were mounting from, and took the opportunity to upgrade all gluster clients on other systems while waiting for the reboot. Version 3.10.1 fixes the bug. All instances with errors in their logs were restarted.
2017-04-27 11:20 | 2017-04-27 12:45 | Campus Cluster | GPFS interruption. | Both the resource manager and the scheduler went down. | The Torque serverdb file was corrupted; we restored the file from this morning's snapshot and modified the data to match the current state.
2017-04-26 12:00 | 2017-04-26 18:30 | Condo | A bug in deleting a disk partition from GPFS; a problem within GPFS. | DES, Condo partitions, and the UofI Library. | The partitions had been up for 274 days, through many changes. The delete-partition bug forced us to stop all operations on the condo and repair each disk through GPFS. We must have quarterly maintenance; it is too complicated to go a year without resetting things.
2017-04-19 16:54 | 2017-04-20 08:45 | gpfs01, iForge | Filled-up metadata disks on the I/O servers caused failures on gpfs01. | iForge clusters, including all currently running jobs. | Scheduling on iForge was paused for the duration of the incident and running jobs were killed. 13% of metadata space was freed; the clusters were rebooted and scheduling resumed.
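Full metadata pools are the kind of condition a simple capacity alert catches before the filesystem fails. An illustrative sketch (not NCSA's actual monitoring; the 90% threshold, pool names, and numbers are assumptions — real usage would come from the filesystem's reporting tools, e.g. GPFS's `mmdf`):

```python
# Hedged sketch: flag storage pools whose metadata usage crosses a threshold,
# the condition that took gpfs01 down. All data below is made up.

def metadata_alerts(pools, threshold=0.90):
    """pools: dict name -> (used_blocks, total_blocks).

    Returns the sorted names of pools at or above the threshold.
    """
    return sorted(name for name, (used, total) in pools.items()
                  if total and used / total >= threshold)

pools = {
    "gpfs01-meta": (99, 100),  # effectively full -> should alert
    "gpfs02-meta": (60, 100),  # healthy
}
print(metadata_alerts(pools))  # → ['gpfs01-meta']
```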

2017-04-19 08:00 | 2017-04-19 13:00 | Campus Cluster | Merging xpacc data and /usr/local back to data01 (April PM). | Resource manager and scheduler were unavailable during the maintenance. | Once again, /usr/local, /projects/xpacc, and /home/<xpacc users> are mounting from data01. No more split cluster.
2017-04-04 (1330) | 2017-04-04 (1600) | Networking | Fiber cuts caused a routing loop inside one of the campus ISPs' networks. | Certain traffic traversing this ISP would never reach its final destination; some DNS lookups would also have failed. | Campus was able to route around the problem, and the ISP corrected their internal problem. The cut fiber was restored last night.
2017-03-28 (0000) | 2017-03-29 (1600) | LSST | NPCF chilled water outage. | LSST Slurm cluster nodes will be offline during the outage; all other LSST systems are expected to remain operational. | No issues; Slurm nodes restarted.
2017-03-28 (0000) | 2017-03-29 (0230) | Blue Waters | NPCF chilled water outage. | Full system shutdown on Blue Waters (except Sonexion, which is needed for fsck). | FSCK done on all Lustre file systems, XDP piping work done (no leakage found), software updates (PE, Darshan) completed.
Blue Waters | BW scratch MDT failover; df hangs. | Load on the MDS was 500+, which delayed the failover; post-failover issues further delayed return to service. | Scheduler was paused.
Blue Waters | BW login node ps hang. | Rebooted h1-h3; lost the bw/h2ologin DNS record and had neteng recreate it. Had to rotate logins in and out of the round-robins until all were rebooted. User email sent (2). | Login nodes rebooted; DNS round-robin changes.
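The rotation pattern used here (pull a login node out of the round-robin record, reboot it, return it) can be sketched in a few lines. The record contents and node names are hypothetical, and the in-memory list stands in for the real DNS zone, which neteng would actually edit:

```python
# Hedged sketch of rotating a node out of / back into a DNS round-robin
# record during a rolling reboot. Purely illustrative list manipulation.

def rotate_out(round_robin, node):
    """Return the round-robin record with `node` removed (if present)."""
    return [n for n in round_robin if n != node]

def rotate_in(round_robin, node):
    """Return the round-robin record with `node` appended (if absent)."""
    return round_robin if node in round_robin else round_robin + [node]

bw_login = ["h1", "h2", "h3"]          # hypothetical round-robin members
bw_login = rotate_out(bw_login, "h2")  # take h2 out before its reboot
bw_login = rotate_in(bw_login, "h2")   # return h2 to service afterwards
print(bw_login)  # → ['h1', 'h3', 'h2']
```

Doing this one node at a time keeps the round-robin name resolving to only healthy logins throughout the rolling reboot.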
2017-03-23 (1000) | 2017-03-23 (1500) | Nebula | NCSA Nebula outage. | Nebula will take an outage to balance the file system and build a more stable setup. This requires pausing all instances, and Horizon will be unavailable. | File system online and stable; all blocks were balanced and healed.
2017-03-16 (0630) | 2017-03-16 (1130) | LSST | LSST monthly maintenance. | GPFS filesystems will go offline for the entire duration of the outage. Some systems may be rebooted, especially those that mount one or more of the GPFS filesystems.
Blue Waters | Failure on cabinet c9-7, affecting the HSN. | Filesystem hung for several minutes. | Scheduler was paused for 50 minutes; cabinet c9-7 was warmswapped. Nodes on c9-7 are reserved for further diagnosis.
2017-03-15 09:00 | 2017-03-15 12:47 | Campus Cluster | UPS work at ACB: reshuffling electrical drops on 10K controllers, storage IB switches, and some servers. | Scheduler paused for regular jobs; MWT2 and DES continued to run on their nodes. | UPS work at ACB incomplete (additional parts required); power redistribution work done. Scheduler was paused for 3 hrs 50 min.
2017-03-10 13:00 | 2017-03-10 18:00 | Campus Cluster | ICCP: we lost the 10K controllers due to some type of power disturbance at ACB. | ICCP lost all filesystems; a cluster-wide outage. | Recovered the missing LUNs and rebooted the cluster. The cluster was back in service at 18:00.
2017-03-09 0900 | 2017-03-09 1500 | ROGER | ROGER planned PM. | Batch, Hadoop, data transfer services, and Ambari. | System out for 6 hrs; DT services out until 0000.
2017-03-08 19:41 | 2017-03-08 22:41 | Blue Waters | The XDP serving four cabinets (c16-10, c17-10, c18-10, c19-10) powered off. | Scheduler paused; the four racks were power-cycled. Moab required a restart: too many down nodes, and iterations were stuck. | Scheduler paused for three hours.
2017-03-03 1700 | 2017-03-03 2200 | Blue Waters | BW HPSS emergency outage to clean up the DB2 database. | ncsa#nearline; stores were failing with "cache full" errors. | Resolved the cache-full errors.
2017-02-28 1200 | 2017-02-28 1250 | Campus Cluster | ICC resource manager down. | Users could not submit or start new jobs. | Removed the corrupted job file.
2017-02-22 1615 | 2017-02-22 1815 | Nebula | Nebula Gluster issues. | All Nebula instances were paused while Gluster was repaired. | Nebula is available.
2017-02-11 1900 | 2017-02-11 2359 | NPCF | NPCF power hit. | BW Lustre was down; XDP heat issues. | Returned to service 2017-02-11 2359.
2017-02-15 0800 | 2017-02-15 1800 | Campus Cluster | ICC scheduled PM. | Batch jobs and login node access.