On Mon Jan 21, 2008 at 15:46:26 -0500, Andrew Ferguson wrote:
>Oliver Hookins wrote:
>> rdiff-backup has options to exclude device files, FIFOs, other filesystems
>> etc but what we'd find really useful is being able to exclude any
>> filesystems that are bind mounted, cdroms (i.e. iso9660), or loop-mounted.
>> Looking through Python docs I can't see any modules or any nice ways to do
>> this programmatically.
>> One possibility would be to parse /proc/mounts or
>> other files where mount information is kept, and check the paths of the files
>> that are to be backed up, but this seems hacky. Does anyone else have this
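For what it's worth, parsing /proc/mounts is only a few lines of Python. Here's a minimal sketch (the function name and fstype set are just placeholders, not anything rdiff-backup provides): each line of /proc/mounts is a whitespace-separated record "device mountpoint fstype options dump pass", with the fstype in the third field.

```python
def mounts_of_type(fstypes, mounts_lines):
    """Return mount points whose filesystem type is in fstypes.

    mounts_lines: an iterable of /proc/mounts-style lines, i.e.
    "device mountpoint fstype options dump pass".
    """
    points = []
    for line in mounts_lines:
        fields = line.split()
        if len(fields) >= 3 and fields[2] in fstypes:
            # /proc/mounts escapes spaces in paths as \040
            points.append(fields[1].replace('\\040', ' '))
    return points

# On a live system you would feed it the real file:
#   with open('/proc/mounts') as f:
#       cd_mounts = mounts_of_type({'iso9660', 'udf'}, f)
```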
>There's the --exclude-other-filesystems option. Scripted properly, that
>might achieve what you want.
>Parsing /proc/mounts is non-portable and susceptible to change. You
>should do that in your backup script and generate --include/--exclude
>lists from it.
>If you are on BSD, you can use the patch implementing a
>--include/exclude-mount-type option (which I think is what you want). It
>uses a BSD specific field in statfs to make it very easy to implement.
>Since it is non-portable, you can get it from the Wiki:
Unfortunately we're on Linux. At the moment we're generating the exclude
list by hand with a small script but this "exclude-mount-type" option sounds
like exactly what we want. It definitely can't be done on Linux?
>> Also we are backing up a number of machines with very many small files (say,
>> 12 million or so). This results in lots of statfs() operations on these
>> files and the whole backup is very slow (mainly due to the server which has
>> to run a number of backups simultaneously). I thought perhaps making more use
>> of the inode cache would speed this up, but is there a better way of having
>> these backups that need to compare mtimes of lots of files make use of disk
>> or RAM?
>I'm not sure what you're talking about here. rdiff-backup doesn't make
>any statfs() calls.
Well, stat() or whatever it uses... I'm sure it grabs file information in
some way in order to determine the mtime of each file in the source and the
destination. Our problem is that caching this information for our 12
million files would take some 20GB of memory, which we don't have and never
will in this system. Are there any other improvements or tweaks we could
make to speed up the mtime comparisons across such a large number of files?
rdiff-backup-users mailing list at rdiff-backup-users@xxxxxxxxxx
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki