X-Git-Url: https://fleuret.org/cgi-bin/gitweb/gitweb.cgi?p=finddup.git;a=blobdiff_plain;f=finddup.1;h=2da4ee3dc12e5b24d40182090bd9cfca0d4e2943;hp=46a4326fb9f2e1244aef1bfe9a93588f43505880;hb=8c988a4aca00501c9a9d53f4ff228dcb0bce0acb;hpb=e4133d06373b48e8509afd0811bb0a726d74f8a8

diff --git a/finddup.1 b/finddup.1
index 46a4326..2da4ee3 100644
--- a/finddup.1
+++ b/finddup.1
@@ -10,12 +10,13 @@ finddup \- Find files common to two directories (or not)
 
 .SH "SYNOPSIS"
 
-\fBfinddup\fP [OPTION]... DIR1 [[and:|not:]DIR2]
+\fBfinddup\fP [OPTION]... [DIR1 [[and:|not:]DIR2]]
 
 .SH "DESCRIPTION"
 
-With a single directory argument, \fBfinddup\fP prints the duplicated
-files found in it.
+With one directory as argument, \fBfinddup\fP prints the duplicated
+files found in it. If no directory is provided, it uses the current
+one as default.
 
 With two directories, it prints either the files common to both DIR1
 and DIR2 or, with the `not:' prefix, the ones present in DIR1 and not
@@ -61,35 +62,29 @@
 show the real path of the files
 .TP
 \fB-i\fR, \fB--same-inodes-are-different\fR
 files with same inode are considered as different
-.TP
-\fB-m\fR, \fB--md5\fR
-use MD5 hashing
 .SH "BUGS"
 
 None known, probably many. Valgrind does not complain though.
 
-The MD5 hashing is not satisfactory. It is computed for a file only if
-the said file has to be read fully for a comparison (i.e. two files
-match and we have to read them completely).
-
-Hence, in practice lot of partial MD5s are computed, which costs a lot
-of cpu and is useless. This often hurts more than it helps, hence it
-is off by default. The only case when it should really be useful is
-when you have plenty of different files of same size, and lot of
-similar ones, which does not happen often.
+The current algorithm is dumb, as it does not use any hashing of the
+file content.
 
-Forcing the files to be read fully so that the MD5s are properly
-computed is not okay neither, since it would fully read certain files,
-even if we will never need their MD5s.
+Here are the things I tried, which did not help at all: (1) computing
+md5s on the whole files, which is not satisfactory because files are
+often never read entirely, hence the md5s cannot be properly computed;
+(2) computing XOR of the first 4, 16 and 256 bytes with rejection as
+soon as one does not match; (3) reading parts of the files of
+increasing sizes so that rejection could be done with a small fraction
+when possible; (4) using mmap instead of open/read.
 
 .SH "WISH LIST"
 
 The format of the output should definitely be improved. Not clear how.
 
-Their could be some fancy option to link two instances of the command
-running on different machines to reduce network disk accesses. Again,
-this may not help much, for the reason given above.
+There could be some fancy option to link two instances of the command
+running on different machines to reduce network disk accesses. This
+may not help much though.
 .SH "EXAMPLES"