Section: User Commands (1.2)
Updated: Apr 2011

NAME
finddup - Find files common to two directories (or not)


SYNOPSIS
finddup [OPTION]... [DIR-OR-FILE1 [[and:|not:]DIR-OR-FILE2]]

DESCRIPTION
With a single directory as argument, finddup prints the duplicated files found in it. If no directory is provided, it uses the current directory by default.

With two directories, it prints either the files common to both DIR1 and DIR2 or, with the `not:' prefix, the ones present in DIR1 and not in DIR2. The `and:' prefix is assumed by default and is necessary only if you have a directory name starting with `not:'. A plain file argument is handled like a directory containing that single file.

finddup compares files by first comparing their sizes, so it runs reasonably fast.
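The size-first idea can be sketched with standard tools (GNU find and awk are assumed; the directory, file names and contents below are made up for illustration):

```shell
# Sketch of the size-first pass: group files by size and keep only the
# sizes that occur more than once, so only those candidates need a
# byte-by-byte comparison. Throwaway files, made up for illustration.
dir=$(mktemp -d)
printf 'aaaa' > "$dir/one"
printf 'aaaa' > "$dir/two"
printf 'bb'   > "$dir/three"
# Emit "<size> <path>" for every file, sort by size, and print only the
# lines whose size field appears at least twice.
candidates=$(find "$dir" -type f -printf '%s %p\n' | sort -n \
  | awk '$1 == prev { if (line) print line; line = ""; print }
         $1 != prev { line = $0; prev = $1 }')
echo "$candidates"
rm -r "$dir"
```

Only the files sharing a size class then need their contents compared, which is why the approach is fast even without hashing.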

When looking for identical files, finddup associates a group ID to every distinct content and prints it alongside the file names. Use the -g option to switch this off.

Note that finddup DIR is virtually the same as finddup -i DIR DIR.

OPTIONS
-v, --version
print the version number and exit
-h, --help
print the help and exit
-d, --ignore-dots
ignore files and directories starting with a dot
-0, --ignore-empty
ignore empty files
-c, --hide-matchings
do not show which files from DIR2 correspond to files from DIR1 (hence, show only the files from DIR1 which have an identical twin in DIR2)
-g, --no-group-ids
do not show the file group IDs
-t, --time-sort
sort files in each group according to the modification times
-q, --trim-first
do not print the first file in each group
-p, --show-progress
show progress information in stderr
-r, --real-paths
show the real path of the files
-i, --same-inodes-are-different
consider files with the same inode as different
-e <command>, --exec <command>
execute the provided command for each group of identical files, with their names as arguments
-f <string>, --result-prefix <string>
for each group of identical files, write one result file whose name is the given prefix string followed by the group number, and containing one file name per line

BUGS
None known, probably many. Valgrind does not complain though.

Since files with the same inode are considered different when looking for duplicates in a single directory, there are surprising behaviors -- not bugs -- with hard links.
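For reference, hard links are distinct directory entries sharing one inode, which is why the single-directory mode reports them as separate files. A throwaway illustration (GNU stat is assumed; the file names are made up):

```shell
d=$(mktemp -d)
printf 'same bytes' > "$d/a"
ln "$d/a" "$d/b"           # hard link: a second name for the same inode
ia=$(stat -c '%i' "$d/a")  # GNU stat: print the inode number
ib=$(stat -c '%i' "$d/b")
echo "$ia $ib"             # the two numbers are identical
rm -r "$d"
```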

The current algorithm is dumb, as it does not use any hashing of the file content.

Here are the things I tried which did not help at all: (1) computing MD5 checksums on whole files, which is not satisfactory because files are often not read entirely, so the checksums cannot be properly computed; (2) computing XORs of the first 4, 16 and 256 bytes, with rejection as soon as one does not match; (3) reading files in chunks of increasing sizes, so that rejection could happen after reading only a small fraction when possible; (4) using mmap instead of open/read.
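Idea (3), prefix comparison with early rejection, can be sketched with cmp (GNU cmp's -n option is assumed; the two throwaway files below are made up and differ only in their last byte, so every prefix check passes and only the full-length check rejects):

```shell
a=$(mktemp); b=$(mktemp)
head -c 100000 /dev/zero > "$a"
{ head -c 99999 /dev/zero; printf 'X'; } > "$b"
identical=yes
# Compare growing prefixes, stopping at the first mismatch.
for n in 16 256 4096 100000; do
  if ! cmp -s -n "$n" "$a" "$b"; then identical=no; break; fi
done
echo "$identical"
rm -f "$a" "$b"
```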

WISH LIST
The format of the output should definitely be improved. Not clear how.

There could be some fancy option to link two instances of the command running on different machines, to reduce network disk accesses. This may not help much, though.

EXAMPLES
finddup -p0d blah

List duplicated files in directory ./blah/, show a progress bar, ignore empty files, and ignore files and directories starting with a dot.

finddup -qtg

List all files which are duplicated in the current directory; do not show the oldest file in each group of identical ones, and do not show group numbers. This is what you could use to list files to remove.

finddup sources not:/mnt/backup

List all files found in ./sources/ which do not have a content-matching equivalent in /mnt/backup/.

finddup -g tralala cuicui

List groups of files with the same content which exist in both ./tralala/ and ./cuicui/. Do not show group IDs; instead, write empty lines between groups of files with the same content.

AUTHOR
Written by Francois Fleuret <> and distributed under the terms of the GNU General Public License version 3 as published by the Free Software Foundation. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law.


