-The current algorithm is dumb, that is it does not use any hashing of
-the file content. I tried md5 on the whole file, which is not
-satisfactory because files are often never read entirely hence the md5
-can not be properly computed. I also tried XOR of the first 4, 16 and
-256 bytes with rejection as soon as one does not match. Did not help
-either.
+Since files sharing the same inode are treated as different files when
+looking for duplicates in a single directory, hard links lead to odd
+behaviors -- which are not bugs.
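+
+As a reference, here is a minimal sketch, assuming a POSIX system, of
+how two paths could be recognized as hard links to the same file with
+stat(2). The function name is illustrative and not taken from the
+actual source; the tool does not perform such a check, hence the
+behavior above.
+
+  /* Sketch only: two names refer to the same underlying file when
+     both the device and the inode numbers match. */
+  #include <sys/stat.h>
+
+  static int same_inode(const char *path_a, const char *path_b) {
+    struct stat sa, sb;
+    if (stat(path_a, &sa) != 0 || stat(path_b, &sb) != 0) return 0;
+    return sa.st_dev == sb.st_dev && sa.st_ino == sb.st_ino;
+  }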
+
+The current algorithm is dumb, as it does not use any hashing of the
+file content.
+
+Here are the things I tried, none of which helped at all:
+
+  (1) Computing md5s of the whole files, which is not satisfactory
+      because files are often not read entirely, hence the md5s cannot
+      be properly computed.
+
+  (2) Computing XORs of the first 4, 16, and 256 bytes, with rejection
+      as soon as one does not match.
+
+  (3) Reading files in chunks of increasing sizes, so that rejection
+      can happen after reading only a small fraction when possible (a
+      sketch of this idea is given below).
+
+  (4) Using mmap instead of open/read.