btrfs-device - manage devices of btrfs filesystems
btrfs device <subcommand> <args>
The btrfs device command group is used to manage devices of btrfs filesystems.
A btrfs filesystem can be created on top of a single or multiple block devices. Data
and metadata are organized in allocation profiles with various redundancy
policies. There’s some similarity with traditional RAID levels, but
this could be confusing to users familiar with the traditional meaning. Due to
the similarity, the RAID terminology is widely used in the documentation. See
mkfs.btrfs(8) for more details and the exact profile capabilities and constraints.
The device management works on a mounted filesystem. Devices can be added,
removed or replaced by commands provided by btrfs device and btrfs replace.
The profiles can be also changed, provided there’s enough workspace to do
the conversion, using the btrfs balance command, namely the filter convert.
A profile describes an allocation policy based
on the redundancy/replication constraints in connection with the number of
devices. The profile applies to data and metadata block groups separately.
Where applicable, the level refers to a
profile that matches constraints of the standard RAID levels. At the moment
the supported ones are: RAID0, RAID1, RAID10, RAID5 and RAID6.
See the section TYPICAL USECASES
for some examples.
add [-Kf] <device> [<device>...] <path>
Add device(s) to the filesystem identified by <path>.
If applicable, a whole device discard (TRIM) operation is performed prior to
adding the device. A device with an existing filesystem detected by
blkid(8) will prevent device addition and has to be forced.
Alternatively the filesystem can be wiped from the device using e.g. the
wipefs(8) tool.
The operation is instant and does not affect existing data. The operation merely
adds the device to the filesystem structures and creates some block group headers.
-K|--nodiscard: do not perform discard (TRIM) by default
-f|--force: force overwrite of an existing filesystem on the given disk(s)
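For illustration, assuming a spare device /dev/sdc (a hypothetical name) that still carries a stale filesystem signature, it can either be wiped first or the addition can be forced:

$ wipefs -a /dev/sdc
$ btrfs device add /dev/sdc /mnt

or, equivalently for this purpose:

$ btrfs device add -f /dev/sdc /mnt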
remove <device> [<device>...] <path>
Remove device(s) from a filesystem identified by <path>.
Device removal must satisfy the profile constraints, otherwise the command
fails. The filesystem must be converted to profile(s) that would allow the
removal. This can typically happen when going down from 2 devices to 1 and
using the RAID1 profile. See the example section below.
The operation can take a long time as it needs to move all data from the device.
It is possible to delete the device that was used to mount the filesystem. The
device entry in the mount table will be replaced by another device name with the
lowest device id.
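A minimal sketch, reusing the /dev/sdb and /mnt names from the examples below:

$ btrfs device remove /dev/sdb /mnt

The command returns only after the data has been relocated, which is why it can run for a long time on a well-filled filesystem.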
delete <device> [<device>...] <path>
Alias of remove kept for backward compatibility.
ready <device>
Wait until all devices of a multiple-device
filesystem are scanned and registered within the kernel module. This is to
provide a way for automatic filesystem mounting tools to wait before the mount
can start. The device scan is only one of the preconditions and the mount can
fail for other reasons. Normal users usually do not need this command and may
safely ignore it.
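A sketch of how a mount helper might use it, with a hypothetical device name:

$ btrfs device ready /dev/sdb && mount /dev/sdb /mnt

The subcommand is expected to exit with status 0 once all member devices of the filesystem containing /dev/sdb are registered in the kernel.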
scan [(--all-devices|-d)|<device> [<device>...]]
Scan devices for a btrfs filesystem and register them with the kernel module.
This allows mounting a multiple-device filesystem by specifying just one device
from the whole group.
If no devices are passed, all block devices that blkid reports to contain btrfs
are scanned.
The options --all-devices or -d are deprecated and kept for
backward compatibility. If used, the behavior is the same as if no devices are
passed.
The command can be run repeatedly. Devices that have been already registered
remain as such. Reloading the kernel module will drop this information.
There’s an alternative way of mounting a multiple-device filesystem
without the need for prior scanning. See the mount option device in btrfs(5).
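For illustration (device names are hypothetical), a multiple-device filesystem can be mounted after a scan, or without one by listing the remaining members via the device mount option:

$ btrfs device scan
$ mount /dev/sda /mnt

or

$ mount -o device=/dev/sdb,device=/dev/sdc /dev/sda /mnt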
stats [options] <path>|<device>
Read and print the device IO error statistics
for all devices of the given filesystem identified by <path>,
or for a single <device>. The filesystem must be mounted. See
section DEVICE STATS
for more information about the reported statistics
and the meaning.
-z|--reset: print the stats and reset the values to zero afterwards
-c|--check: check if the stats are all zeros and return 0 if it is so. Set bit 6
of the return code if any of the statistics is non-zero. The error value is 65
if reading stats from at least one device failed, otherwise it’s 64.
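For example, a single invocation can print the current counters and clear them afterwards (the path /mnt is illustrative):

$ btrfs device stats -z /mnt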
usage [options] <path> [<path>...]
Show detailed information about internal allocations in devices.
-b|--raw: raw numbers in bytes, without the B suffix
-h|--human-readable: print human friendly numbers, base 1024, this is the default
-H: print human friendly numbers, base 1000
--iec: select the 1024 base for the following options, according to the IEC standard
--si: select the 1000 base for the following options, according to the SI standard
-k|--kbytes: show sizes in KiB, or kB with --si
-m|--mbytes: show sizes in MiB, or MB with --si
-g|--gbytes: show sizes in GiB, or GB with --si
-t|--tbytes: show sizes in TiB, or TB with --si
If conflicting options are passed, the last one takes precedence.
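A short sketch with an illustrative mount point, printing the per-device allocations in raw bytes and then in base-1000 units:

$ btrfs device usage -b /mnt
$ btrfs device usage --si /mnt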
TYPICAL USECASES
Assume we’ve created a filesystem on a block device /dev/sda with profile single/single
(data/metadata), the device size is 50GiB and
we’ve used the whole device for the filesystem. The mount point is /mnt.
The amount of data stored is 16GiB, metadata have allocated 2GiB.
ADD NEW DEVICE
We want to increase the total size of the filesystem and keep the profiles. The
size of the new device /dev/sdb is 100GiB.
$ btrfs device add /dev/sdb /mnt
The amount of free data space increases by less than 100GiB, as some space is
allocated for metadata.
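The effect of the addition can be inspected with the usage subcommand described above (output omitted here):

$ btrfs device usage /mnt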
CONVERT TO RAID1
Now we want to increase the redundancy level of both data and metadata, but
we’ll do that in steps. Note that the device sizes are not equal and
we’ll use that to show the capabilities of split data/metadata profiles.
The constraint for RAID1 gives us at most 50GiB of usable space and exactly 2
copies will be stored on the devices.
First we’ll convert the metadata. As the metadata occupy less than 50GiB
and there’s enough workspace for the conversion process, we can do:
$ btrfs balance start -mconvert=raid1 /mnt
This operation can take a while as the metadata have to be moved and all block
pointers updated. Depending on the physical locations of the old and new
blocks, the disk seeking is the key factor affecting performance.
You’ll note that the system block group has also been converted to RAID1;
this normally happens as the system block group also holds metadata (the
physical to logical mappings).
• available data space decreased by 3GiB, usable roughly (50 - 3) + (100 - 3) = 144 GiB
• metadata redundancy increased
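One way to observe these changes is to list the block group profiles (a sketch, output omitted):

$ btrfs filesystem df /mnt

After the conversion, the Metadata and System lines should report RAID1 while Data remains single.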
In other words, the unequal device sizes allow for combined space for data yet improved
redundancy for metadata. If we decide to increase the redundancy of data as well,
we’re going to lose 50GiB of the second device for obvious reasons.
$ btrfs balance start -dconvert=raid1 /mnt
The balance process needs some workspace (i.e. free device space without any
data or metadata block groups), so the command could fail if there’s too
much data or the block groups occupy the whole first device.
The device size of /dev/sdb
as seen by the filesystem remains unchanged,
but the logical space from 50-100GiB will be unused.
DEVICE STATS
The device stats keep a persistent record of several error classes related to
doing IO. The current values are printed at mount time and updated during
filesystem lifetime or from a scrub run.
$ btrfs device stats /dev/sda3
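The output lists one counter per line for each device; an illustrative result with no recorded errors could look like:

[/dev/sda3].write_io_errs   0
[/dev/sda3].read_io_errs    0
[/dev/sda3].flush_io_errs   0
[/dev/sda3].corruption_errs 0
[/dev/sda3].generation_errs 0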
write_io_errs: Failed writes to the block devices, means that
the layers beneath the filesystem were not able to satisfy the write request.
read_io_errs: Read request analogy to write_io_errs.
flush_io_errs: Number of failed writes with the FLUSH
flag set. The flushing is a method of forcing a particular order between write
requests and is crucial for implementing crash consistency. In case of btrfs,
all the metadata blocks must be permanently stored on the block device before
the superblock is written.
corruption_errs: A block checksum mismatched or a corrupted
metadata header was found.
generation_errs: The block generation does not match the
expected value (eg. stored in the parent node).
EXIT STATUS
btrfs device returns a zero exit status if it succeeds. Non zero is
returned in case of failure.
If the --check option is used, btrfs device stats
will add 64 to the
exit status if any of the error counters is non-zero.
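A sketch of how a monitoring script might test that bit (the path is illustrative):

$ btrfs device stats --check /mnt
$ ret=$?
$ [ $(( ret & 64 )) -ne 0 ] && echo "error counters are non-zero"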
btrfs is part of btrfs-progs. Please refer to the btrfs wiki
for further details.