Upgrading that backup drive!
Tags: Backups, linux, servers
A couple of years ago, I purchased a Western Digital external combo drive to back up my laptops and a couple of the critical servers here. It was also partitioned for holding the digital images we take with our Minolta DiMAGE 7Hi. It was a mere 120GB of capacity, but it lasted for quite a long time… still, the time had come to upgrade it.
The enclosure has two interfaces: USB 2.0 and FireWire 400 (1394a). It works great and has served me well for the couple of years I’ve had it. No complaints at all with it.
I recently went out and bought two Maxtor MaxLine Plus II 250GB drives: one for the main server, and one to replace the 120GB drive in the WD enclosure.
The upgrade of the external enclosure’s drive went pretty smoothly (full details of the disassembly), and the new drive was recognized without a hitch. I proceeded to back up 3 of the servers here to the drive, including making a duplicate copy of what was on the 120GB WD onto this new 250GB drive. I verified the backups to be sure everything was intact. I’ve had a LOT of bad luck with storage and computer peripherals in general, so I was taking no chances.
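For the verification step, here’s a minimal sketch of the sort of checksum comparison I mean; the mount points /mnt/source and /mnt/backup are placeholders, not the actual paths I used:

# Dry-run rsync with checksums: lists any file whose contents differ
# between the source tree and the backup copy, without changing anything.
rsync -avnc /mnt/source/ /mnt/backup/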
The other drive went into the main server here, and that wasn’t so easy. I did an rsync of the existing running data to the Maxtor while it was installed as the primary slave. So far, so good. I wanted to chroot to that drive’s mountpoint and just re-run lilo to create a working MBR on the slave, but that didn’t work so well.
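Roughly what I was attempting, as a sketch; the device name and mount point (/dev/hdb1, /mnt/newdisk) are placeholders for illustration, not the exact ones on this box:

# Copy the running system onto the new drive, staying on one filesystem
# so /proc and friends are skipped, preserving permissions and ownership.
mount /dev/hdb1 /mnt/newdisk
rsync -avx / /mnt/newdisk/

# Then chroot into the copy and re-run lilo so the new drive gets a boot sector.
# (In practice lilo.conf's boot= line also has to point at the new disk,
# which is part of why this step can go sideways.)
mount --bind /dev /mnt/newdisk/dev
mount --bind /proc /mnt/newdisk/proc
chroot /mnt/newdisk /sbin/lilo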
OK, second plan: switch the drives, boot the server to KNOPPIX, chroot from there, and run lilo. Nope, of course not. My KNOPPIX disks, which I use almost weekly, were no longer recognized in the CD-ROM drive in the server. In fact, NO CD-ROM was recognized in that drive. Arg!
So I had to put the original drive back in as the slave, switch the BIOS to let me boot from that second drive, and re-run lilo from there, which put the right MBR on the master. Whew. A few hiccups with some startup scripts, and I was back in business. The drive is pushing about 1GB/sec. over cache, and 49MB/sec. over disk reads. Not bad at all.
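For a quick read-speed check like this, hdparm’s timing test is one way to get numbers of that shape; a sketch, assuming the new drive shows up as /dev/hda (a placeholder device name):

# -T times reads straight from the kernel's buffer cache (the ~1GB/sec figure),
# -t times sequential reads from the disk itself (the ~49MB/sec figure).
hdparm -tT /dev/hda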
Once I wiped the servers after doing the backup, I stupidly decided to try to defrag the ext partition on the backup drive. It was ext3, so e2defrag barfed on it. I used tune2fs to turn off the has_journal and dir_index features in the filesystem metadata, and tried again.
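A sketch of that feature-stripping step, with /dev/sda1 standing in for whatever device the backup partition actually was:

# With the filesystem unmounted, drop the journal and hashed-directory
# features so it is effectively plain ext2, which is closer to what
# e2defrag expects, then fsck it before attempting the defrag.
tune2fs -O ^has_journal,^dir_index /dev/sda1
e2fsck -f /dev/sda1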
This time it got as far as calculating the inode indices, then crashed. Uh oh. I ran e2fsck on the drive, and it segfaulted about 70% into the process. Double uh-oh! I ran it several times, and it segfaulted in the same place every time. Running it under gdb produced the following barf:
0xb7fcf45b in ext2fs_unmark_generic_bitmap () from /lib/libext2fs.so.2
Rut-roh! So I decided to yank all of the data off of the backup drive onto other systems with enough free space to hold it, and reformatted it as XFS instead. After restoring the data, all seems well.
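The reformat-and-restore itself is straightforward; a sketch, again using /dev/sda1 as a placeholder for the backup partition, /mnt/backup for its mount point, and a made-up otherhost path for wherever the data was parked:

# Create the new XFS filesystem (this destroys whatever is on the partition),
# mount it, then pull the data back, preserving permissions and timestamps.
mkfs.xfs -f /dev/sda1
mount /dev/sda1 /mnt/backup
rsync -av otherhost:/space/backup-staging/ /mnt/backup/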
Whew!