As is usually the way with these things, a year-end upgrade was planned for our full-disk encryption software to bring it to the latest version.  In a perfect world, it would uninstall the old version and install the new one.  It was tested on several machines at our headquarters only, with no adverse effects.  A gradual rollout was scheduled to begin AFTER the encryption administrator came back from a short vacation.

The day after the administrator left on vacation, we started to get laptops coming in from across North America with the new version of encryption on them.  The machines themselves were being shipped to headquarters for reimages due to various issues, none related to the encryption itself.  We groaned to each other, because this is par for the course for us, and made ourselves a note to say unfriendly words to our administrator when he came back.

Our administrator returned from vacation, and admitted that SOMEHOW the encryption upgrade had been pushed to 3500 machines outside of headquarters.  He assured us that he would halt the rollout, and that machines that hadn’t begun to install the new version would not be upgraded.

Several days later, we started to see machines come in with the new version of encryption installed, but with a twist: the encryption software would not boot.  We took one over to our administrator, who tinkered with it, concluded that he couldn’t do anything with it, and said that we should reimage it.  This lasted for several hours, until we noticed that the number of tickets indicating a machine inbound for reimage had ballooned, all reporting the same fatal encryption error.  We quickly realized that we would be receiving several hundred of these machines in the space of a few days, when our normal volume for reimages is only 10-20 machines per day.

Cue the panic.

We bluntly informed the powers-that-be that we had neither the physical space to stage several hundred machines, nor the manpower to reimage several hundred machines (at least, not quickly), nor the server space to perform data backups (as we usually do whenever we reimage).

The powers-that-be quickly responded, getting us several conference rooms set up for reimaging, and getting more than a dozen IT employees to volunteer for some crash-course training in reimaging.

However, the server space turned out to be a problem.  We had at our disposal a PowerEdge with a 1.3 TB RAID array and a normal desktop computer with a 1 TB hard drive.  Normally, we could make this last for about two weeks before we had to start deleting old data to make room for new data.  However, our Security group firmly insisted that we needed to retain all the data from machines afflicted with this encryption error.

The other IT employees immediately began plumbing the depths of whatever resources they had.  Someone offered to archive old data on their group’s network share; another person donated a legacy server and configured it with a 900 GB RAID array.  However, we knew (and told the powers-that-be) that we needed far more than that.

Amidst everyone else running around like beheaded poultry, our chief security officer (who happens to be one of the developers of BackTrack) walked up to me and asked about how much space was actually needed.  I ballparked high and told him 20 TB.  He nodded, grabbed his coat, and walked out.

Thirty minutes later, he returned from Best Buy laden down with bags full of 1 TB external hard drives.  He had driven to the nearest Best Buy, walked up to the customer service counter, and casually purchased their entire stock of them.

And he bought a coffee maker.
