.TH "FINDDUP" "1.1" "Apr 2010" "Francois Fleuret" "User Commands"
.\" This man page was written by Francois Fleuret <francois@fleuret.org>
.\" and is distributed under a Creative Commons Attribution-Share Alike
.\" license.
.SH "NAME"
finddup \- find duplicated files, or files common to two directories
.SH "SYNOPSIS"
\fBfinddup\fP [OPTION]... [DIR1 [[and:|not:]DIR2]]
.SH "DESCRIPTION"
With a single directory as argument, \fBfinddup\fP prints the
duplicated files found in it. If no directory is provided, it uses the
current one by default.
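.PP
For example, to print the duplicated files found under the current
directory:
.PP
.nf
finddup
.fi
.PP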
With two directories, it prints either the files common to both DIR1
and DIR2 or, with the `not:' prefix, the ones present in DIR1 and not
in DIR2. The `and:' prefix is assumed by default and necessary only if
you have a directory name starting with `not:'.
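.PP
For example, to list the files present in \fB./tralala/\fR which have
no identical twin in \fB./cuicui/\fR:
.PP
.nf
finddup ./tralala/ not:./cuicui/
.fi
.PP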
This command compares files by first comparing their sizes, hence goes
reasonably fast.
.SH "OPTIONS"
ignore empty files
.TP
\fB-c\fR, \fB--hide-matchings\fR
do not show which files from DIR2 correspond to files from DIR1
(hence, show only the files from DIR1 which have an identical twin in
DIR2)
.TP
\fB-g\fR, \fB--no-group-ids\fR
do not show the file group IDs
.TP
\fB-t\fR, \fB--time-sort\fR
sort files in each group according to their modification times
.TP
\fB-p\fR, \fB--show-progress\fR
show progress information on stderr
.TP
\fB-i\fR, \fB--same-inodes-are-different\fR
consider files with the same inode as different
.SH "BUGS"
None known, probably many. Valgrind does not complain though.
Since files with the same inode are considered as different when
looking for duplicates in a single directory, there are weird
behaviors -- not bugs -- with hard links.
.PP
The current algorithm is dumb, as it does not use any hashing of the
file content.
.PP
Here are the things I tried, which did not help at all: (1) computing
MD5s on the whole files, which is not satisfactory because files are
often not read entirely, hence the MD5s cannot be properly computed;
(2) computing XORs of the first 4, 16 and 256 bytes, with rejection as
soon as one does not match; (3) reading files in parts of increasing
sizes, so that rejection could be done with only a small fraction read
when possible; (4) using mmap instead of open/read.
.SH "WISH LIST"
The format of the output should definitely be improved. Not clear how.
There could be some fancy option to link two instances of the command
running on different machines to reduce network disk accesses. This
may not help much though.
.SH "EXAMPLES"
.nf
finddup -g ./tralala/ ./cuicui/
.fi
List groups of files with the same content which exist both in
\fB./tralala/\fR and \fB./cuicui/\fR. Do not show group IDs; instead,
write an empty line between groups of files of same content.
.SH "AUTHOR"