According to the btrfs documentation[1], adding another disk to an existing btrfs file system is equivalent to creating a RAID array with raid1 for metadata and raid0 for data (mkfs.btrfs -m raid1 -d raid0).
So wiping one of the disks corrupts the data, but the metadata should still be OK:
# btrfs fi show
Label: none  uuid: 3e7e0fb5-5fec-4938-bc20-0e5dfdf466ff
	Total devices 2 FS bytes used 128.00KiB
	devid    1 size 512.00MiB used 88.00MiB path /dev/loop0
	*** Some devices missing
The behaviour in this test case changed in the kernel with commit 4330e183c9537df20952d4a9ee142c536fb8ae54: we should be able to mount the remaining drive 'rw' in degraded mode, unless the data chunk was destroyed.
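The scenario above can be sketched with loop devices. This is a minimal reproduction sketch, not the reporter's exact setup: the image paths, sizes, and mount point are illustrative assumptions, and the balance step is added to make the raid1-metadata/raid0-data layout explicit.

```shell
#!/bin/sh
# Sketch of the two-device test described above. Paths, sizes, and the
# mount point are illustrative assumptions. Requires root and btrfs-progs;
# skips gracefully otherwise.

if [ "$(id -u)" -eq 0 ] && command -v mkfs.btrfs >/dev/null 2>&1; then
    truncate -s 512M /tmp/disk0.img /tmp/disk1.img
    dev0=$(losetup --show -f /tmp/disk0.img)
    dev1=$(losetup --show -f /tmp/disk1.img)
    mnt=/mnt/btrfs-test
    mkdir -p "$mnt"

    # Start with a single-device filesystem, then add the second device.
    mkfs.btrfs -f "$dev0"
    mount "$dev0" "$mnt"
    btrfs device add -f "$dev1" "$mnt"
    # Convert to the layout the documentation describes:
    # raid1 metadata, raid0 data.
    btrfs balance start -dconvert=raid0 -mconvert=raid1 "$mnt"
    umount "$mnt"

    # Wipe the second device and attempt a degraded rw mount.
    wipefs -a "$dev1"
    if mount -o degraded "$dev0" "$mnt"; then
        echo "degraded mount succeeded"
        umount "$mnt"
    else
        echo "degraded mount failed (data chunk destroyed?)"
    fi

    losetup -d "$dev0" "$dev1"
    rm -f /tmp/disk0.img /tmp/disk1.img
    result=ran
else
    echo "SKIP: needs root and btrfs-progs"
    result=skipped
fi
```

On a kernel containing commit 4330e183c95, the degraded mount is expected to succeed read-write as long as no data chunk lived solely on the wiped device.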
[1] https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices