Snapshot backups of EVERYTHING using rsync (including Windows!)


[Image: Just a bunch of disks]

Let me just start by saying that I have a lot of data. In multiple places. Some on laptops, some on servers, some on removable drives and mirrored hard disks sitting in a bank vault (yes, really). Lots of data on lots of systems in different states and locations: client data, personal data, work data, community data and lots more.

Over the years, I’ve tried my best to unify where that data is sourced from, for example by relocating the standard “My Documents” location on all of my Windows machines (physical and virtual) to point to a Samba share served from a GELI-encrypted volume on my FreeBSD or Linux servers. That part works well so far, but it’s only a small piece of the larger puzzle.
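For the curious, the server side of that is nothing exotic. Here’s a rough sketch of the idea; the device, paths, share name and user are examples, not my actual layout:

     # FreeBSD side: encrypt the disk with GELI and put a filesystem on it
     geli init /dev/ad1            # one-time setup; prompts for a passphrase
     geli attach /dev/ad1          # creates /dev/ad1.eli
     newfs /dev/ad1.eli
     mkdir -p /export/docs
     mount /dev/ad1.eli /export/docs

     # smb.conf share stanza that the Windows boxes map to
     [docs]
         path = /export/docs
         read only = no
         valid users = myuser

On the Windows side, each machine’s “My Documents” folder just gets re-targeted at the \\server\docs share through the folder’s Properties.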

Over the last decade, the amount of data I’m holding and responsible for managing has grown significantly, and I needed a better way to manage it all.

There are plenty of backup solutions for Linux, including the popular Amanda and Bacula, but I needed something more portable, leaner and much more efficient. That quest led me to Unison, mostly due to its cross-platform support, but it was still a bit more complicated than I needed.

So I kept looking and eventually found rsnapshot, a Perl-based wrapper around Andrew Tridgell’s standard rsync utility.
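The rsnapshot model is appealingly simple: one config file says where snapshots live, how many of each interval to keep, and what to back up. A minimal sketch (paths and hosts are examples only; the real file insists on tabs between fields, and newer releases spell “interval” as “retain”):

     # /etc/rsnapshot.conf -- fields must be separated by TABs, not spaces
     snapshot_root   /backup/snapshots/
     interval        daily    7
     interval        weekly   4
     # uncomment cmd_ssh in the stock config to pull from remote hosts over ssh
     backup          /home/                          localhost/
     backup          root@server.example.com:/etc/   server/

Cron then just runs “rsnapshot daily” and “rsnapshot weekly” on the appropriate schedule.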

Since I’d already been using rsync quite a bit over the last 10 years or so, both to copy data around as needed and to perform nightly full backups of my remote servers, I decided to build a new backup solution around it, one that handles incremental backups as well as full backups.
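The piece that makes incremental snapshots cheap is rsync’s --link-dest option: files that haven’t changed get hard-linked against the previous snapshot, so every snapshot looks like a full copy but only costs the space of what actually changed. A hand-rolled sketch of the idea, with made-up paths and host:

     #!/bin/sh
     # Snapshot into a dated directory, hard-linking unchanged files
     # against the previous run pointed to by the "latest" symlink
     TODAY=$(date +%Y-%m-%d)
     rsync -a --delete \
         --link-dest=/backup/laptop/latest \
         user@laptop:/home/user/ \
         /backup/laptop/$TODAY/
     # repoint "latest" at the snapshot we just made
     ln -snf /backup/laptop/$TODAY /backup/laptop/latest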

I’m already using rsync to pull a couple of terabytes of mirrored data to my servers on a nightly basis. I’m mirroring CPAN, FreeBSD, Project Gutenberg, Cygwin, Wikipedia and several other key projects, so this was a natural graft onto my existing environment.
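Each of those mirrors is pulled the same way: a cron entry per project running rsync against a public mirror’s rsync module. Something along these lines, where the mirror URL is a placeholder rather than a real host:

     # crontab entry: refresh the CPAN mirror at 02:30 every night
     30 2 * * * rsync -az --delete rsync://mirror.example.org/CPAN/ /mirror/CPAN/ >> /var/log/mirror-cpan.log 2>&1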


Upgrading that backup drive!


A couple of years ago, I purchased a Western Digital external combo drive to back up my laptops and a couple of the critical servers here. It was also partitioned to hold the digital images we take with our Minolta DiMAGE 7Hi. It had a mere 120 GB of capacity, but it lasted quite a long time… still, it was time to upgrade it.

The enclosure has two interfaces: USB 2.0 and FireWire 400 (1394a). It works great and has served me well for the couple of years I’ve had it. No complaints at all.

I recently went out and bought two Maxtor MaxLine Plus II 250 GB drives; one for the main server, and one to replace the 120 GB drive in the WD enclosure.

The upgrade of the external enclosure’s drive went pretty smoothly (full details of the disassembly), and the new drive was recognized without a hitch. I proceeded to back up three of the servers here to it, including making a duplicate copy of what was on the 120 GB WD onto the new 250 GB drive. I made sure to verify the backups to be sure everything was intact; I’ve had a LOT of bad luck with storage and computer peripherals in general, so I was taking no chances.
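One easy way to verify a copy like that is to re-run rsync with full checksums in dry-run mode: anything it would re-copy is a file that doesn’t match the source. A sketch with example paths:

     # -c forces checksum comparison, -n (dry run) means nothing is copied;
     # any file listed differs between the source and the backup
     rsync -avcn /data/server1/ /mnt/backup/server1/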

The other drive went into the main server here, and that wasn’t so easy. I did an rsync of the existing running data to the Maxtor while it was installed in the primary slave slot. So far, so good. I wanted to chroot to that drive’s mountpoint and just re-run lilo to create a working MBR on the slave, but that didn’t work so well.
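For the record, that plan looks roughly like this, assuming the new drive showed up as /dev/hdb with its root filesystem on /dev/hdb1 (device names are illustrative):

     mount /dev/hdb1 /mnt/new
     mount --bind /dev /mnt/new/dev
     mount -t proc proc /mnt/new/proc
     chroot /mnt/new /sbin/lilo -v    # needs a lilo.conf inside the chroot
                                      # with boot= pointing at the new drive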

Ok, second plan: switch the drives, boot the server to KNOPPIX, chroot from there, and run lilo. Nope, of course not. My KNOPPIX discs, which I use almost weekly, were suddenly no longer recognized in the server’s CD-ROM drive. In fact, NO CD-ROM was recognized in that drive. Arg!

So I had to put the original drive back in as the slave, switch the BIOS to let me boot from that second drive, and then re-run lilo from there, which put the right MBR on the master. Whew. A few hiccups with some startup scripts, and I was back in business. The drive is pushing about 1 GB/sec on cached reads and 49 MB/sec on disk reads. Not bad at all.
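(Figures like those are the kind of thing hdparm’s timing test reports; something like this, assuming the drive is /dev/hda:)

     hdparm -tT /dev/hda    # -T times cached reads, -t times buffered disk reads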

Once I wiped the servers after doing the backup, I stupidly decided to try to defrag the ext partition on the backup drive. It was ext3, so e2defrag barfed on it. I used tune2fs to turn off the has_journal and dir_index features in the filesystem metadata, and tried again.
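In other words, roughly this (the device name is an example, and the filesystem has to be unmounted first):

     umount /mnt/backup
     tune2fs -O ^has_journal,^dir_index /dev/hdb1    # strip the ext3 journal and hashed dirs
     e2fsck -f /dev/hdb1                             # force a full check after removing the journal
     e2defrag /dev/hdb1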

This time it got as far as calculating the inode indices, then crashed. Uh oh. I ran e2fsck on the drive, and it segfaulted about 70% of the way into the process. Double uh-oh! I ran it several times, and it segfaulted in the same place every time. Running it under gdb produced the following barf:

     0xb7fcf45b in ext2fs_unmark_generic_bitmap () from /lib/libext2fs.so.2

Rut-roh! So I decided to yank all of the data off of the backup drive onto other systems with enough free space to hold it, and reformatted it to XFS instead. After restoring the data across, all seems well.
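The shuffle itself was straightforward, something along the lines of this (devices, hosts and paths are examples):

     # stash the data elsewhere, reformat as XFS, then bring it back
     rsync -av /mnt/backup/ otherbox:/scratch/backup-stash/
     umount /mnt/backup
     mkfs.xfs -f /dev/hdb1
     mount /dev/hdb1 /mnt/backup
     rsync -av otherbox:/scratch/backup-stash/ /mnt/backup/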

Whew!
