+None known; probably many. Valgrind does not complain, though.
+
+Since files with the same inode are considered different when looking
+for duplicates in a single directory, hard links lead to surprising
+behaviors -- which are not bugs.
+
+The current algorithm is naive: it does not use any hashing of the
+file content.
+
+The following approaches were tried and did not help at all: (1)
+computing an MD5 of each whole file, which is unsatisfactory because
+files are often not read entirely, so the MD5s cannot always be
+computed; (2) computing XORs of the first 4, 16, and 256 bytes,
+rejecting a pair as soon as one does not match; (3) reading files in
+chunks of increasing size, so that when possible a pair could be
+rejected after reading only a small fraction; (4) using mmap instead
+of open/read.
+
+.SH "WISH LIST"
+
+The format of the output should definitely be improved; it is not
+clear how.
+
+There could be some fancy option to link two instances of the command
+running on different machines, to reduce disk accesses over the
+network. This may not help much, though.
+
+.SH "EXAMPLES"
+
+.B finddup -p0d blah
+
+.fi
+List duplicated files in the directory \fB./blah/\fR, show a progress
+bar, ignore empty files, and ignore files and directories starting
+with a dot.
+
+.P
+.B finddup sources not:/mnt/backup
+
+.fi
+List all files found in \fB./sources/\fR which do not have a
+content-matching equivalent in \fB/mnt/backup/\fR.
+
+.P
+.B finddup -g tralala cuicui
+
+.fi
+List groups of files with the same content that exist in both
+\fB./tralala/\fR and \fB./cuicui/\fR. Do not show group IDs; instead,
+write an empty line between groups of files with the same content.