X-Git-Url: https://fleuret.org/cgi-bin/gitweb/gitweb.cgi?p=finddup.git;a=blobdiff_plain;f=finddup.1;h=2da4ee3dc12e5b24d40182090bd9cfca0d4e2943;hp=691e910cd1ba52bb3787fa4f4648d6ed6b22cdbc;hb=8c988a4aca00501c9a9d53f4ff228dcb0bce0acb;hpb=f7bb05fb425e5ff6f1e9105cd36309786963825d

diff --git a/finddup.1 b/finddup.1
index 691e910..2da4ee3 100644
--- a/finddup.1
+++ b/finddup.1
@@ -10,22 +10,25 @@ finddup \- Find files common to two directories (or not)
 
 .SH "SYNOPSIS"
 
-\fBfinddup\fP [OPTION]... DIR1 [[and:|not:]DIR2]
+\fBfinddup\fP [OPTION]... [DIR1 [[and:|not:]DIR2]]
 
 .SH "DESCRIPTION"
 
-With a single directory argument, \fBfinddup\fP prints the duplicated
-files found in it. With two directories, it prints either the files
-common to both DIR1 and DIR2, or with the `not:' prefix, the ones
-present in DIR1 and not in DIR2. The and: prefix is assumed by default
-and necessary only if you have a directory name starting with `not:'.
+With a single directory argument, \fBfinddup\fP prints the duplicated
+files found in it. If no directory is provided, it defaults to the
+current one.
+
+With two directories, it prints either the files common to both DIR1
+and DIR2 or, with the `not:' prefix, the ones present in DIR1 and not
+in DIR2. The `and:' prefix is assumed by default and is necessary
+only if a directory name starts with `not:'.
 
 This command compares files by first comparing their sizes, hence
 goes reasonably fast.
 
-When looking for identical files, \fBfinddup\fP associates by default
-a group ID to every content, and prints it along the file names. Use
-the \fB-g\fP to switch it off.
+When looking for identical files, \fBfinddup\fP associates a group ID
+with every distinct content and prints it along with the file names.
+Use the \fB-g\fP option to switch it off.
 
 Note that
 .B finddup DIR
@@ -64,15 +67,24 @@ files with same inode are considered as different
 
 None known, probably many. Valgrind does not complain though.
 
+The current algorithm is dumb, as it does not use any hashing of the
+file content.
+
+Here are the things I tried, which did not help at all: (1) computing
+MD5s on the whole files, which is not satisfactory because files are
+often not read entirely, hence the MD5s cannot be properly computed,
+(2) computing the XOR of the first 4, 16 and 256 bytes, with rejection
+as soon as one does not match, (3) reading parts of the files of
+increasing sizes so that rejection could be done on a small fraction
+when possible, and (4) using mmap instead of open/read.
+
 .SH "WISH LIST"
 
 The format of the output should definitely be improved. Not clear
 how.
 
-The comparison algorithm could definitely use some MD5 kind of
-signature. However, I doubt it would improve speed much.
-
-Their should be some fancy option to link two instances of the command
-running on different machines to reduce network disk accesses.
+There could be some fancy option to link two instances of the command
+running on different machines to reduce network disk accesses. This
+may not help much though.
 
 .SH "EXAMPLES"
 
@@ -80,7 +92,8 @@ running on different machines to reduce network disk accesses.
 .fi
 List duplicated files in directory ./blah/, show a progress bar,
-ignore empty files and files and directories starting with a dot.
+ignore empty files, and ignore files and directories starting with a
+dot.
 
 .P
 .B finddup sources not:/mnt/backup
 .fi
@@ -90,11 +103,12 @@ List all files found in \fB./sources/\fR which do not have
 content-matching equivalent in \fB/mnt/backup/\fR.
 
 .P
-.B finddup tralala cuicui
+.B finddup -g tralala cuicui
 .fi
 List groups of files with same content which exist both in
-\fB./tralala/\fR and \fB./cuicui/\fR.
+\fB./tralala/\fR and \fB./cuicui/\fR. Do not show group IDs; instead,
+write an empty line between groups of files with the same content.
 
 .SH "AUTHOR"
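
As a minimal sketch of the size-first strategy this patch documents (compare sizes first, byte-compare only size-equal files, and attach a group ID to every distinct content), here is a small standalone C program. It is not finddup's actual source: the entry structure, the same_content helper, and the take-files-from-argv interface are all invented for illustration.

/* Minimal sketch, not finddup's actual code: group the files given on
 * the command line by content, comparing sizes first so that only
 * size-equal files are ever read. Compile with: cc -o groupdup groupdup.c */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>

struct entry { const char *name; off_t size; int group; };

/* Sort key: file size, so that potential duplicates become contiguous. */
static int compare_sizes(const void *a, const void *b) {
  off_t sa = ((const struct entry *) a)->size;
  off_t sb = ((const struct entry *) b)->size;
  return (sa > sb) - (sa < sb);
}

/* Byte-by-byte comparison, only ever called on files of equal size. */
static int same_content(const char *name1, const char *name2) {
  FILE *f1 = fopen(name1, "rb"), *f2 = fopen(name2, "rb");
  int result = 0;
  if (f1 && f2) {
    char b1[4096], b2[4096];
    size_t n1, n2;
    result = 1;
    do {
      n1 = fread(b1, 1, sizeof(b1), f1);
      n2 = fread(b2, 1, sizeof(b2), f2);
      if (n1 != n2 || memcmp(b1, b2, n1) != 0) { result = 0; break; }
    } while (n1 == sizeof(b1));
  }
  if (f1) fclose(f1);
  if (f2) fclose(f2);
  return result;
}

int main(int argc, char **argv) {
  int n = argc - 1, next_group = 0;
  struct entry *entries = malloc(n * sizeof(struct entry));
  for (int i = 0; i < n; i++) {
    struct stat st;
    if (stat(argv[i + 1], &st) != 0) { perror(argv[i + 1]); return 1; }
    entries[i].name = argv[i + 1];
    entries[i].size = st.st_size;
    entries[i].group = -1; /* no group assigned yet */
  }
  qsort(entries, n, sizeof(struct entry), compare_sizes);
  for (int i = 0; i < n; i++) {
    if (entries[i].group < 0) entries[i].group = next_group++;
    /* Files of a different size cannot match, so this scan stops at the
       end of the current size run without reading any file content. */
    for (int j = i + 1; j < n && entries[j].size == entries[i].size; j++)
      if (entries[j].group < 0 && same_content(entries[i].name, entries[j].name))
        entries[j].group = entries[i].group;
  }
  /* Print a group ID along with every file name, as finddup does by
     default when looking for identical files. */
  for (int i = 0; i < n; i++)
    printf("%d %s\n", entries[i].group, entries[i].name);
  free(entries);
  return 0;
}

Run as ./groupdup a b c, it prints one line per file, prefixed by its content group ID. finddup itself additionally walks directory trees and handles the options above, but the size-first filtering is why it goes reasonably fast without any content hashing.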