I suppose the rename could be made temporary, applying only while both
disks are connected, if so configured.
After some further testing, it seems the bug in mdadm is more general:
in --incremental mode it adds removed disks back to the array. So even
if you explicitly --fail and --remove one of the disks from the array,
a reboot or any other event that causes mdadm --incremental to run will
put the disk back in the array. The only state other than in-sync in
which --incremental should activate a disk is failed; once a disk has
been removed, it should be left alone. The degraded case seems to be
just one specific way of triggering this bug, since it marks the disk
as removed.
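The behaviour described above can be reproduced with a command sequence
along the following lines. This is a sketch, not a definitive test case:
/dev/md0 and /dev/sdb1 are placeholder device names, and the commands
need root privileges and a real (or loopback-backed) array to run.

```shell
# Mark one member disk as failed, then remove it from the array
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1

# On the next reboot (or udev hotplug event) this is run automatically;
# due to the bug it puts the removed disk straight back into the array:
mdadm --incremental /dev/sdb1

# The disk now shows up as an active member again
mdadm --detail /dev/md0
```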