-.TH "FINDDUP" "1.1" "Apr 2010" "Francois Fleuret" "User Commands"
+.TH "FINDDUP" "1.2" "Apr 2011" "Francois Fleuret" "User Commands"
\" This man page was written by Francois Fleuret <francois@fleuret.org>
\" and is distributed under a Creative Commons Attribution-Share Alike
.SH "SYNOPSIS"
-\fBfinddup\fP [OPTION]... [DIR1 [[and:|not:]DIR2]]
+\fBfinddup\fP [OPTION]... [DIR-OR-FILE1 [[and:|not:]DIR-OR-FILE2]]
.SH "DESCRIPTION"
With two directories, it prints either the files common to both DIR1
and DIR2 or, with the `not:' prefix, the ones present in DIR1 and not
in DIR2. The `and:' prefix is assumed by default and necessary only if
-you have a directory name starting with `not:'.
+you have a directory name starting with `not:'. Files are handled like
+directories containing a single file.
This command compares files by first comparing their sizes, hence it
runs reasonably fast.
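The size-first strategy can be sketched as follows. This is a minimal
Python illustration of the idea only, not finddup's actual
implementation, and the function name find_duplicates is made up for
the example:

```python
import os
from collections import defaultdict

def _content(path):
    # Read the whole file; fine for a sketch, though finddup avoids
    # full reads where it can (see BUGS).
    with open(path, "rb") as f:
        return f.read()

def find_duplicates(root):
    """Return groups (lists) of paths under root with identical content.

    Files are first bucketed by size, so byte-by-byte comparison is
    only attempted between files whose sizes already match -- the
    size-first strategy described above.
    """
    by_size = defaultdict(list)
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if not os.path.islink(path) and os.path.isfile(path):
                by_size[os.path.getsize(path)].append(path)

    groups = []
    for paths in by_size.values():
        if len(paths) < 2:
            continue  # a unique size cannot be duplicated
        remaining = list(paths)
        while remaining:
            ref = remaining.pop(0)
            ref_data = _content(ref)
            same = [p for p in remaining if _content(p) == ref_data]
            if same:
                groups.append([ref] + same)
                remaining = [p for p in remaining if p not in same]
    return groups
```

Only files sharing a size bucket are ever opened, which is why most
candidate pairs are rejected without their content being read at all.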
\fB-t\fR, \fB--time-sort\fR
sort the files in each group according to their modification times
.TP
+\fB-q\fR, \fB--trim-first\fR
+do not print the first file in each group
+.TP
\fB-p\fR, \fB--show-progress\fR
show progress information on stderr
.TP
\fB-e \fI<command>\fR, \fB--exec \fI<command>\fR
execute the provided command for each group of identical files, with
their names as arguments
+.TP
+\fB-f \fI<string>\fR, \fB--result-prefix \fI<string>\fR
+for each group of identical files, write one result file whose name is
+the given prefix string followed by the group number, and containing
+one file name per line
.SH "BUGS"
Here are the things I tried, which did not help at all: (1) Computing
md5s on the whole files, which is not satisfactory because files are
-often not read entirely, hence the md5s can not be properly computed,
+often not read entirely, hence the md5s cannot be properly computed,
(2) computing XORs of the first 4, 16 and 256 bytes with rejection as
soon as one does not match, (3) reading files in parts of increasing
sizes so that rejection could be done with only a small fraction read.
The format of the output should definitely be improved. Not clear how.
-Their could be some fancy option to link two instances of the command
+There could be some fancy option to link two instances of the command
running on different machines to reduce network disk accesses. This
may not help much though.
ignore empty files, and ignore files and directories starting with a
dot.
+.B finddup -qtg
+
+.fi
+List all the files which are duplicated in the current directory, do
+not show the oldest of each group of identical ones, and do not show
+group numbers. This is what you could use to list the files to
+remove.
+
.P
.B finddup sources not:/mnt/backup