Before backing up the SD card of your Raspberry Pi, clean up the system first to remove unneeded files such as package source code and apt-get caches. You might be surprised by how much apt-get keeps cached from downloaded packages; it was nearly 600MB for me. Cleaning this up gives you a smaller backup image.
df -h
sudo apt-get clean
df -h
See how much space you freed after cleaning apt-get's cache.
I like to keep a piece of information that tells me the system has just been restored. I do that by either editing the MOTD (Message of the Day) or creating an empty file in my home folder called "THIS_IS_JUST_RESTORED".
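As a sketch, either approach is a one-liner (the marker filename and MOTD wording here are just my own choices):

```shell
# Drop an empty marker file in the home folder:
touch "$HOME/THIS_IS_JUST_RESTORED"

# ...or append a note to the MOTD instead (needs root):
# echo "Restored from backup image" | sudo tee -a /etc/motd

# Confirm the marker exists:
ls -l "$HOME/THIS_IS_JUST_RESTORED"
```

After restoring and booting, the marker (or MOTD line) tells you at a glance which state the system is in; delete it once you have verified everything works.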
To back up the SD card, first plug it into the SD card reader of a laptop/PC that runs Linux. I use Ubuntu 12.04 Precise Pangolin. Copy off any files that you still need, then unmount the SD card. A default Raspbian installation has 2 partitions on the SD card, so do not forget to unmount both. To check which folder is mounted to which device, use
df -h
and see the lines that start with /dev/mmcblk0 (both partitions, e.g. /dev/mmcblk0p1 and /dev/mmcblk0p2). Then unmount the corresponding folders:
sudo umount /media/first_sd_partition
sudo umount /media/second_sd_partition
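A quick sanity check I'd suggest before imaging (assuming the card shows up as mmcblk0, as above): grep df's output for the device, and if nothing matches, both partitions are unmounted.

```shell
# Prints any still-mounted SD card partitions; if there are none,
# prints the all-clear message instead.
df -h | grep mmcblk0 || echo "no SD card partitions mounted"
```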
To back up the SD card, we are going to make an image of the whole device, covering both SD card partitions. On Linux we have dd; I personally use dcfldd, an enhanced version of dd that shows a progress percentage and can calculate multiple hashes on the fly. To install dcfldd on Ubuntu, simply:
sudo apt-get install dcfldd
If you want the backup image to be as small as possible, you have to compress it. On Linux you can compress the image while it is being created (on-the-fly compression). This saves hard disk space, because an image made without on-the-fly compression requires at least as much space as the SD card itself: if you have an 8GB SD card, even one that is only partially used, the resulting image from dd or dcfldd will be around 8GB in size.
To backup with on-the-fly compression:
sudo dcfldd bs=4M if=/dev/mmcblk0 hash=sha1 hashlog=./image.img.sha1sum | gzip > image.img.gz
sudo sync
And we’re done.
Every time after you run dcfldd, do not forget to call sudo sync to flush the system buffers. I forgot to do this once and my system went all crazy and weird.
If you want to do it the long way, you can back up without compression:
sudo dcfldd bs=4M if=/dev/mmcblk0 of=./image.img hash=sha1 hashlog=./image.img.sha1sum
sudo sync
Then if you realize that it takes too much space, you can compress it any time with
gzip image.img
Please note that it will take a while, since we’re dealing with gigabytes of data.
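Once the compression finishes, gzip itself can report how much space you saved. A small sketch with a throwaway demo file (in practice, point it at image.img.gz; note that for images over 4GB the uncompressed-size column wraps around and under-reports):

```shell
# Make a small demo image full of zeros, standing in for image.img:
head -c 1048576 /dev/zero > demo.img
gzip -f demo.img

# -l lists compressed size, uncompressed size, and the ratio:
gzip -l demo.img.gz
```

A mostly-empty SD card compresses extremely well, which is why the on-the-fly gzip step is worth the extra CPU time.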
I also like to watch the image file grow in real time, so I do this:
watch ls -lh image.img*
If your image file is already compressed, you can either uncompress it first and then restore, or uncompress while restoring.
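Either way, before writing anything back it can be worth checking that the compressed image is not corrupt; gunzip -t tests the archive without extracting it. A sketch with a throwaway demo archive (use your real image.img.gz in practice):

```shell
# Create a small demo archive standing in for image.img.gz:
head -c 1048576 /dev/zero > check.img
gzip -f check.img

# -t checks integrity without writing anything; it is silent on success.
gunzip -t check.img.gz && echo "archive OK"
```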
I find uncompressing and restoring at the same time (on the fly) the most efficient way:
gunzip -c image.img.gz | sudo dcfldd bs=4M of=/dev/mmcblk0 hash=sha1 hashlog=./hash.log
sudo sync
Again, do not forget to flush the system buffers by calling sudo sync.
The hashes are there to make sure the backed-up and restored images are consistent and not broken in any way, which can happen during transfer, copying, or upload. After the process finishes, compare both hashes (image.img.sha1sum and hash.log) and make sure they are identical. You don't need to read both files manually; just tell Linux to do it:
diff -sq image.img.sha1sum hash.log
This will tell you whether or not the two files are identical.
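For example, with two matching hash logs (the file contents below are made-up stand-ins for real dcfldd output), diff -s confirms the match explicitly:

```shell
# Two identical stand-in hash logs:
echo "Total (sha1): da39a3ee5e6b4b0d3255bfef95601890afd80709" > image.img.sha1sum
cp image.img.sha1sum hash.log

# -s reports identical files, -q keeps any difference report brief:
diff -sq image.img.sha1sum hash.log
# prints: Files image.img.sha1sum and hash.log are identical
```

If the hashes differ, do not trust the restored card; redo the restore (or the backup) rather than hoping the corruption is harmless.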
If you are bored and have a lot of free hard disk space, you can do the restore the long way: first uncompress the image, then write it to the SD card:
gunzip image.img.gz
sudo dcfldd bs=4M if=./image.img of=/dev/mmcblk0 hash=sha1 hashlog=./hash.log
sudo sync