Incident review: Outage vm3

Published 2014-05-26 by Jochen Lillich

On Thursday, 15 May, one of our VM hosts, named “vm3”, did not return to operation after a standard maintenance procedure, resulting in an outage of more than 14 hours. While we were able to restore all affected DrupalCONCEPT POWER servers, the only backups available were more than 24 hours old. And in the case of one custom-built managed server, we lost most of its files entirely.

We regard reliability and effective IT processes as essential for our business. An outage of this duration and with these results is not acceptable. We are embarrassed and deeply sorry about this incident, and I apologize on behalf of freistil IT to all customers we disappointed.

In this review, I’d like to give you detailed insight into what happened and what we’re going to do to prevent incidents like this in the future.

What happened

On Monday, 12 May, the VM host vm3 reported one of the two disks in its RAID-1 array as failed. It kept running on the second disk without any problems. We scheduled a maintenance window for Thursday, 15 May at 19:00 UTC to have the failed disk replaced, and announced the scheduled maintenance on the freistilbox Status Page.

Data center staff shut down the server at 18:55 UTC (a few minutes early) and replaced the broken disk. After the restart, the server would not boot into a working system again. It turned out that there was no longer a bootable operating system on the remaining disk, which suggested that this disk had failed, too. When we realised that there was nothing we could do about the second failed disk, we decided to take the only viable, albeit laborious, path: rebuilding the server from scratch. After getting the second disk replaced, we started reinstalling the server OS, then the host environment and finally the guest servers.

When we started the restore process, we realised that even the first phase, building a directory tree of the data to restore, would take several hours. We hoped that it would finish overnight, but on Friday morning, after seven hours, the backup database was still collecting data for the restore directory tree. Fortunately, we found out by experimenting that aborting the slow query on the database server forced the backup system to fall back to a full restore of all files in the backup set.

After the restore jobs had finished on all affected servers, we started reimporting the database dumps that were included in the backups. That’s when we found that we had timed the creation of these dumps badly: the daily database dump job actually ran later than the file backup that was supposed to pick the dumps up. Restoring from the Wednesday night backup already meant losing almost a whole day of data; on top of that, this backup only contained database dumps from Tuesday night.
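To illustrate the kind of safeguard that would have caught this mismatch, here is a minimal sketch of a freshness check that could run as the first step of the nightly file backup. The dump directory, file pattern and age threshold are assumptions made for the sake of the example, not our actual configuration.

    #!/usr/bin/env python3
    """Check that the newest database dump is fresh before the file backup runs.

    Hypothetical sketch: dump directory, file pattern and maximum age are assumptions.
    """
    import sys
    import time
    from pathlib import Path

    DUMP_DIR = Path("/var/backups/mysql")   # assumed dump location
    MAX_AGE_HOURS = 6                        # assumed freshness threshold

    def newest_dump_age_hours(dump_dir: Path) -> float:
        """Age in hours of the most recently written dump, or infinity if none exist."""
        dumps = list(dump_dir.glob("*.sql.gz"))
        if not dumps:
            return float("inf")
        newest = max(p.stat().st_mtime for p in dumps)
        return (time.time() - newest) / 3600

    if __name__ == "__main__":
        age = newest_dump_age_hours(DUMP_DIR)
        if age > MAX_AGE_HOURS:
            # A non-zero exit code makes the surrounding backup job fail loudly
            # instead of silently shipping stale dumps off-site.
            print(f"ERROR: newest dump is {age:.1f} h old, refusing to run file backup",
                  file=sys.stderr)
            sys.exit(1)
        print("Database dumps are fresh enough, file backup may proceed.")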

And as if this wasn’t bad enough news for our customers already, it turned out that one of the affected servers didn’t have any of its websites backed up at all. This server is a custom-built managed server. While DrupalCONCEPT and freistilbox servers have everything (including backups) configured automatically, this server would have needed a manual backup configuration, and we had missed that step during setup.

Some customers had newer backups of their own available that we were able to copy back to their server, but in the end, most of them still suffered a catastrophic loss of data.

On Friday at about 11:30 UTC, all servers were back online. We then spent the rest of the day assisting the affected customers with the remaining minor issues.

What we are going to do about it

In a post-mortem meeting on Monday, 19 May, we discussed the incident and decided on remediation measures to prevent it from happening again.

The root cause of the incident, the loss of both disks of a RAID-1 array, is a rare event, but we need to be prepared for it to occur. In particular, we need to minimise the amount of data lost due to such a failure.

While the affected customers had consciously chosen a one-server setup that has many single points of failure (SPOF), neither they nor we had expected that an outage would take this long and result in such catastrophic data loss. We need to make sure that all our backups have complete coverage and that they can be restored within a reasonable amount of time (a few hours at most).

As a result of our post-mortem, we decided on the following remedial measures:

  • We checked to make sure that all customer data, especially on custom-built servers, will be fully backed up from now on.
  • We rescheduled our file backup in order to include the latest database dumps.
  • Planned maintenance must be done right after a backup run. We will either schedule it after the regular daily backup job or trigger an extra backup in advance of the maintenance (see the sketch after this list).
  • We’ll schedule regular disaster recovery exercises where we take production backups and restore them to a spare server.
  • We’ll research how we can speed up the restore process. This could mean improvements to specific components or even switching to a different backup system altogether.
  • If customers need shorter backup periods than 24 hours, we’ll support them in setting up custom backup jobs directly from their content management system.
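As a rough illustration of the third measure, here is a minimal sketch of a pre-maintenance gate: it triggers an extra backup and only gives the go-ahead for maintenance once that backup has completed successfully. The backup command and job name are placeholders assumed for the example; the actual invocation depends on the backup system in use.

    #!/usr/bin/env python3
    """Gate planned maintenance on a freshly completed backup.

    Hypothetical sketch: the backup CLI and job name below are placeholders.
    """
    import subprocess
    import sys

    # Assumption: the backup system offers a CLI that starts a job and waits for it.
    BACKUP_COMMAND = ["/usr/local/bin/run-backup", "--job", "vm3-full", "--wait"]

    def run_pre_maintenance_backup() -> bool:
        """Trigger an on-demand backup and report whether it finished cleanly."""
        result = subprocess.run(BACKUP_COMMAND)
        return result.returncode == 0

    if __name__ == "__main__":
        if run_pre_maintenance_backup():
            print("Backup completed, maintenance may proceed.")
            sys.exit(0)
        print("Backup failed, do NOT start the maintenance.", file=sys.stderr)
        sys.exit(1)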

In conclusion, I’d like to state that this incident exposed an embarrassing lack of preparation on our side for the failure of a whole disk array. I apologize to all affected customers that we were not able to restore normal operation more quickly and to the full extent. I assure you that we are working hard to prevent an incident like this from ever happening again.
