duperemove - Find duplicate extents and print them to stdout
duperemove [options] files...
Duperemove is a simple tool for finding duplicated extents and submitting
them for deduplication. When given a list of files it will hash their contents
on a block by block basis and compare those hashes to each other, finding and
categorizing blocks that match each other. When given the -d option, duperemove
will submit those extents for deduplication using the Linux
kernel extent-same ioctl.
Duperemove can store the hashes it computes in a hashfile. If
given an existing hashfile, duperemove will only compute hashes for
those files which have changed since the last run. Thus you can run
duperemove repeatedly on your data as it changes, without having to
re-checksum unchanged data. For more on hashfiles see the --hashfile
option below as well as the Examples section.
Duperemove can also take input from the fdupes program; see the --fdupes
option below.
Duperemove has two major modes of operation, one of which is a subset of the
other.
When run without -d
(the default) duperemove will print out one or more
tables of matching extents it has determined would be ideal candidates for
deduplication. As a result, readonly mode is useful for seeing what duperemove
might do when run with -d
. The output could also be used by some other
software to submit the extents for deduplication at a later time.
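For example, a readonly run over a directory tree can be captured for later
review (the path and output file below are purely illustrative):
- duperemove -r /foo > dedupe-candidates.txt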
Generally, duperemove does not concern itself with the underlying representation
of the extents it processes. Some of them could be compressed, undergoing I/O,
or even have already been deduplicated. In dedupe mode, the kernel handles
those details and therefore we try not to replicate that work.
Deduping mode functions similarly to readonly mode with the exception that the
duplicated extents found in our "read, hash, and compare" step will actually be
submitted for deduplication. An estimate of the total data deduplicated will
be printed after the operation is complete. This estimate is calculated by
comparing the total amount of shared bytes in each file before and after the
dedupe.
The files argument can refer to a list of regular files and directories, or be
a hyphen (-) to read them from standard input. If a directory is specified, all
regular files within it will also be scanned. Duperemove can also be told to
recursively scan directories with the '-r' switch.
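For example, a file list produced by another tool can be fed in via the hyphen
(the find invocation here is purely illustrative):
- find /foo -type f -name '*.iso' | duperemove -d -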
- -r
- Enable recursive dir traversal.
- -d
- De-dupe the results - only works on btrfs and xfs.
- -A
- Opens files readonly when deduping; currently requires root
privileges (and is enabled by default for root). Allows use on readonly
snapshots or when the file might be open for exec.
- -h
- Print numbers in human-readable format.
- --hashfile=hashfile
- Use a file for storage of hashes instead of memory. This
option drastically reduces the memory footprint of duperemove and is
recommended when your data set is more than a few files large.
Hashfiles are also reusable, allowing you to further reduce the
amount of hashing done on subsequent dedupe runs.
If hashfile does not exist it will be created. If it exists,
duperemove will check the file paths stored inside of it for
changes. Files which have changed will be rescanned and their updated
hashes will be written to the hashfile. Deleted files will be
removed from the hashfile.
New files are only added to the hashfile if they are discoverable via
the files argument. For that reason you probably want to provide
the same files list and -r arguments on each run of
duperemove. The file discovery algorithm is efficient and will only
visit each file once, even if it is already in the hashfile.
Adding a new path to a hashfile is as simple as adding it to the files
argument.
When deduping from a hashfile, duperemove will avoid deduping files which
have not changed since the last dedupe.
- -L
- Print all files in the hashfile and exit. Requires the
--hashfile option. Will print additional information about each
file when run with -v.
- -R [file]
- Remove file from the db and exit. Can be specified multiple
times. Duperemove will read the list from standard input if a hyphen (-)
is provided. Requires the --hashfile option.
Note: If you are piping filenames from another duperemove instance
it is advisable to do so into a temporary file first as running duperemove
simultaneously on the same hashfile may corrupt that hashfile.
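For example, assuming -L prints one path per line, stale entries could be
staged in a temporary file and removed in a second pass (paths and pattern are
illustrative):
- duperemove -L --hashfile=foo.hash | grep '^/foo/old/' > remove.list
- duperemove -R - --hashfile=foo.hash < remove.list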
- --fdupes
- Run in fdupes mode. With this option you can pipe
the output of fdupes to duperemove to dedupe any duplicate files
found. When receiving a file list in this manner, duperemove will skip the
hashing phase.
- -v
- Be verbose.
- --skip-zeroes
- Read data blocks and skip any zeroed blocks, useful for
speeding up duperemove, but can prevent deduplication of zeroed files.
- -b size
- Use the specified block size. Raising the block size will
consume less memory but may miss some duplicate blocks. Conversely,
lowering the blocksize consumes more memory and may find more duplicate
blocks. The default blocksize of 128K was chosen with these
parameters in mind.
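For example, a smaller blocksize might be tried on a modest data set (the
value and path are illustrative; sizes are commonly given with a k suffix):
- duperemove -dr -b 64k /foo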
- --io-threads=N
- Use N threads for I/O. This is used by the file hashing and
dedupe stages. Default is automatically detected based on number of host
cpus.
- --cpu-threads=N
- Use N threads for CPU bound tasks. This is used by the
duplicate extent finding stage. Default is automatically detected based on
number of host cpus.
Note: Hyperthreading can adversely affect performance of the extent
finding stage. If duperemove detects an Intel CPU with hyperthreading it
will use half the number of cores reported by the system for cpu bound
tasks.
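For example, to cap resource usage on a shared machine (the thread counts and
path are illustrative):
- duperemove -dr --io-threads=4 --cpu-threads=4 /foo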
- --dedupe-options=options
- Comma separated list of options which alter how we dedupe.
Prepend 'no' to an option in order to turn it off.
- [same]
- Defaults to off. Allow dedupe of extents within the
same file.
- [fiemap]
- Defaults to on. Duperemove uses the fiemap
ioctl during the dedupe stage to optimize out already deduped extents as
well as to provide an estimate of the space saved after dedupe operations.
Unfortunately, some versions of Btrfs exhibit extremely poor performance in
fiemap as the number of references on a file extent goes up. If you are
experiencing the dedupe phase slowing down or 'locking up' this option may
give you a significant amount of performance back.
Note: This does not turn off all usage of fiemap. To disable fiemap
during the file scan stage, you will also want to use the
--lookup-extents=no option.
- [block]
- Defaults to on. Duperemove submits duplicate blocks
directly to the dedupe engine.
Duperemove can optionally optimize the duplicate block lists into larger
extents prior to dedupe submission. The search algorithm used for this
however has a very high memory and cpu overhead, but may reduce the number
of extent references created during dedupe. If you'd like to try this, run
duperemove with the 'noblock' option.
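As an illustration, several of these options can be combined in a single run
(the combination and path shown are only an example):
- duperemove -dr --dedupe-options=nofiemap,same /foo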
- --help
- Prints help text.
- --lookup-extents=[yes|no]
- Defaults to no. Allows duperemove to skip checksumming some
blocks by checking their extent state.
- -x
- Don't cross filesystem boundaries, this is the default
behavior since duperemove v0.11. The option is kept for backwards
compatibility.
- --read-hashes=hashfile
- This option is primarily for testing. See the
--hashfile option if you want to use hashfiles.
Read hashes from a hashfile. A file list is not required with this option.
Dedupe can be done if duperemove is run from the same base directory as is
stored in the hash file (basically duperemove has to be able to find the
files in order to dedupe them).
- --write-hashes=hashfile
- This option is primarily for testing. See the
--hashfile option if you want to use hashfiles.
Write hashes to a hashfile. These can be read in at a later date and deduped
from.
- --debug
- Print debug messages, forces -v if selected.
- --hash-threads=N
- Deprecated, see --io-threads above.
- --hash=alg
- You can choose between murmur3 and xxhash. The default is
murmur3 as it is very fast and can generate 128 bit digests for a very
small chance of collision. Xxhash may be faster but generates only 64 bit
digests. Both hashes are fast enough that the default should work well for
the overwhelming majority of users.
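For example, to try the alternate hash (purely illustrative; the default is
fine for most users):
- duperemove -dr --hash=xxhash /foo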
Dedupe the files in directory /foo, recurse into all subdirectories. You only
want to use this for small data sets.
- duperemove -dr /foo
Use duperemove with fdupes to dedupe identical files below directory foo.
- fdupes -r /foo | duperemove --fdupes
Duperemove can optionally store the hashes it calculates in a hashfile.
Hashfiles have two primary advantages - memory usage and re-usability. When
using a hashfile, duperemove will stream computed hashes to it, instead of
holding them in memory.
If duperemove is run with an existing hashfile, it will only scan those files
which have changed since the last time the hashfile was updated. The files
argument controls which directories duperemove will scan for
newly added files. In the simplest usage, you rerun duperemove with the same
parameters and it will only scan changed or newly added files - see the first
example below.
Dedupe the files in directory foo, storing hashes in foo.hash. We can run this
command multiple times and duperemove will only checksum and dedupe changed or
newly added files.
- duperemove -dr --hashfile=foo.hash foo/
Don't scan for new files, only update changed or deleted files, then dedupe.
- duperemove -dr --hashfile=foo.hash
Add directory bar to our hashfile and discover any files that were recently
added to foo.
- duperemove -dr --hashfile=foo.hash foo/ bar/
List the files tracked by foo.hash.
- duperemove -L --hashfile=foo.hash
Duperemove v0.11 is fast at reading and cataloging data. Dedupe runs will be
memory limited unless the '--hashfile' option is used. '--hashfile' allows
duperemove to temporarily store duplicated hashes to disk, thus removing the
large memory overhead and allowing for a far larger amount of data to be
scanned and deduped. Realistically though you will be limited by the speed of
your disks and cpu. In those situations where resources are limited you may
have success by breaking up the input data set into smaller pieces.
When using a hashfile, duperemove will only store duplicate hashes in memory.
During normal operation, the hash tree will make up the largest portion of
duperemove's memory usage. As of Duperemove v0.11 hash entries are 88 bytes in
size. If you know the number of duplicate blocks in your data set you can get
a rough approximation of memory usage by multiplying that count by the hash
entry size.
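For example, a data set containing 1 million duplicate blocks (a hypothetical
figure, for illustration only) would need roughly:
1000000 * 88 = 88000000, or about 84MB for the hash tree.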
Actual performance numbers are dependent on hardware - up to date testing
information is kept on the duperemove wiki (see below for the link).
Hashfiles are essentially sqlite3 database files with several tables, the
largest of which are the files and hashes tables. Each hashes table entry is
under 90 bytes though that may grow as features are added. The size of a files
table entry depends on the file path, but a good estimate is around 270 bytes
per file.
If you know the total number of blocks and files in your data set then you can
calculate the hashfile size as:
Hashfile Size = Num Hashes * 90 + Num Files * 270
Using a real world example of 1TB (8388608 128K blocks) of data over 1000 files:
8388608 * 90 + 270 * 1000 = 755244720, or about 720MB for 1TB spread over 1000
files.
Yes. Duperemove uses a transactional database engine and organizes db changes
to take advantage of those features. The result is that you should be able to
ctrl-c the program at any point and re-run without experiencing corruption of
your hashfile.
Duperemove will print out an estimate of the saved space after a dedupe
operation for you.
You can get a more accurate picture by running 'btrfs fi df' before and after
each duperemove run.
Be careful about using the 'df' tool on btrfs - it is common for space reporting
to be 'behind' while delayed updates get processed, so an immediate df after
deduping might not show any savings.
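For example, a before/after check might look like this (the mount point and
arguments are illustrative; the sync helps flush delayed updates first):
- btrfs fi df /mnt
- duperemove -dr --hashfile=foo.hash /mnt/foo
- sync
- btrfs fi df /mnt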
At the moment duperemove can detect that some underlying extents are shared with
other files, but it can not resolve which files those extents are shared with.
Imagine duperemove is examining a series of files and it notes a shared data
region in one of them. That data could be shared with a file outside of the
series. Since duperemove can't resolve that information it will account the
shared data against our dedupe operation while in reality, the kernel might
deduplicate it further for us.
This is a little complicated, but it comes down to a feature in Btrfs called
_bookending_. The Btrfs wiki explains this in detail (see the wiki link
below).
Essentially though, the underlying representation of an extent in Btrfs can not
be split (with small exception). So sometimes we can end up in a situation
where a file extent gets partially deduped (and the extents marked as shared)
but the underlying extent item is not freed or truncated.
Yes. To be specific, duperemove does not deduplicate the data itself. It simply
finds candidates for dedupe and submits them to the Linux kernel extent-same
ioctl. In order to ensure data integrity, the kernel locks out other access to
the file and does a byte-by-byte compare before proceeding with the dedupe.
Deduplication will lead to increased fragmentation. The blocksize chosen can
have an effect on this. Larger blocksizes will fragment less but may not save
you as much space. Conversely, smaller block sizes may save more space at the
cost of increased fragmentation.
Deduplication is currently only supported by the btrfs and xfs filesystems.
The Duperemove project page can be found at
http://github.com/markfasheh/duperemove
There is also a wiki at http://github.com/markfasheh/duperemove/wiki
hashstats(8) filesystems(5) btrfs(8) xfs(8)