Archive for the 'Storage' Category
HOWTO: Backup your Android device over WiFi with rsync (NO root needed)
I stumbled upon this recently and wanted to add it to the list of other HOWTOs I have on using rsync to back up your device or machine. I’m a BIG fan of rsync, and use it all the time for backing up Linux, BSD, and Windows machines, tablets, and pretty much everything else.
Now, it’s drop-dead simple to back up your Android device over WiFi, using an rsync app on the device that talks to any rsync daemon (rsyncd) you’ve got set up to receive the data. The setup isn’t terribly intuitive with this specific app, but it does work really well.
I found an app called “Syncopoli” through F-Droid (a free alternative to the Google Play Store), and installed it to try backing up my non-rooted Android device. On my rooted Android devices, I use Titanium Backup Pro, which works fabulously. Without root, you’re very limited in the permissions available to read and copy data onto and off of the device.
So, install Syncopoli, and then tap the 3-dot menu in the upper-right corner of the app to access the “Settings” pane. In here, you’ll want to set your rsync username, password, any relevant keys or other data you need, and also the IP address of the rsync server. Since my rsync server lives on the same LAN, I also checked the box to back up over WiFi only.
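The username and password you enter here have to match what the rsync daemon on the server expects. As a rough sketch, that pairing lives in a secrets file on the server; the “android” user and the password below are made-up placeholders, not anything Syncopoli requires:

android:some-long-passphrase

That line goes in /etc/rsyncd.secrets (one “user:password” entry per line), and the file must not be readable by other users, or rsyncd will refuse to use it:

# chmod 600 /etc/rsyncd.secrets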
I’ll play with setting up a secure tunnel later, so I can back up to my rsync server over the live Internet.
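For a rough idea of what that tunnel might look like from a Linux client, something like the following forwards a local port to the rsync daemon on the server (the hostname, user, and local port are placeholders, and I haven’t tested whether Syncopoli itself can point at a tunnel):

# ssh -N -L 8873:localhost:873 backupuser@my-rsync-server

With that running, an rsync client on the local machine talks to port 8873 on 127.0.0.1, and SSH carries the traffic to the daemon’s port 873 on the server.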
Once the global settings are correctly configured for your environment, click the (+) sign in the lower-right corner to set up a sync profile. Here, you’ll specify what and where you want your data backed up.
Here’s where things get a little tricky. Initially, I thought it was asking for the IP of the Origin and Destination, but it’s actually asking for a path on the Origin side, and an rsync module on the Destination side. It’s not terribly intuitive, but once I figured that out and reconfigured my rsync server with a dedicated module (stanza) in rsyncd.conf for the Android device to back up to, it started working.
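For reference, here’s a minimal sketch of the kind of stanza I mean in rsyncd.conf; the module name matches what I ended up using below, but the path and user are placeholders for whatever fits your server:

[Android]
    path = /nas/Backups/Android
    read only = no
    auth users = android
    secrets file = /etc/rsyncd.secrets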
To go back in and edit your profile, long-press the profile name and choose “Edit Profile” from the popup dialog.
I also went back into the global Settings and set some optimized rsync options (-avP --inplace --partial), and then clicked the “|>” play button in the upper-right to kick off the rsync job. Tapping on the profile after starting the job, I could then see the client-side log on my Android device. Those logs are what helped me debug the initial connection errors and the incorrect Origin/Destination paths.
So my Origin ended up being /storage/extSdCard (the external media card in my Android device) and the Destination was my “Android” module on the server. As I type this blog post, it’s pulled over 1,277 files and counting.
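In plain rsync terms, the job the app kicks off is roughly equivalent to the following, where the server IP and user are placeholders from my sketch above. (Strictly speaking, -P already implies --partial, so listing it separately is redundant but harmless.)

# rsync -avP --inplace --partial /storage/extSdCard/ rsync://android@192.168.1.10/Android/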
Quick, easy, fantastic and you don’t need root!
Using fdupes to Solve the Data Duplication Problem: I’ve got some dupes!
Well, 11.6 hours after setting fdupes loose on the NAS, I noticed that I’ve got some dupes across my system backups.
# time ./fdupes -R -Sm "/nas/Backups/System Backups/"
2153352 duplicate files (in 717685 sets), occupying 102224.5 megabytes

real    698m15.606s
user    38m20.758s
sys     92m17.217s
That’s 2.1 million duplicate files occupying about 100GB of storage capacity in my backups folder on the NAS. DOH!
Now the real work begins, making sense of what needs to stay and what needs to get tossed in here.
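fdupes itself can help with that triage; one option is its interactive delete mode, which walks through each duplicate set and asks which copy to keep. With 717,685 sets it would be a long session, but it’s a reasonable first pass on a smaller subdirectory:

# fdupes -r -d "/nas/Backups/System Backups/"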
UPDATE: I may give up on fdupes altogether, and jump to rmlint instead. rmlint is significantly faster, and has more features. Here’s a sample of the output:
# rmlint -t12 -v6 -KY -o "/nas/Backups/System Backups/"
Now scanning "/nas/Backups/System Backups/".. done.
Now in total 3716761 useable file(s) in cache.
Now mergesorting list based on filesize... done.
Now finding easy lint...
Now attempting to find duplicates. This may take a while...
Now removing files with unique sizes from list...109783 item(s) less in list.
Now removing 3917500 empty files / bad links / junk names from list...
Now sorting groups based on their location on the drive... done.
Now doing fingerprints and full checksums..
Now calculation finished.. now writing end of log...
=> In total 3716761 files, whereof 1664491 are duplicate(s)
=> In total 77.66 GB [83382805000 Bytes] can be removed without dataloss.
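A nice bonus with rmlint, at least in releases newer than the one I ran above (where the flags differ): rather than deleting anything itself, it writes out an rmlint.sh script that you can inspect before pulling the trigger. The workflow is roughly:

# rmlint "/nas/Backups/System Backups/"
# less rmlint.sh
# sh rmlint.sh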