This is a weird corner case. Extending an lvmraid(7) type1 mirror for the second time seems to result in the mirror legs not getting synced, *if* there is another type1 mirror in the vg. This reliably reproduces for me:
# quickly fill two 10G files with random data
openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero | dd bs=$((1024*1024*1024)) count=10 of=pv1.img iflag=fullblock
openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero | dd bs=$((1024*1024*1024)) count=10 of=pv2.img iflag=fullblock
# change loop devices if you have loads of snaps in use
losetup /dev/loop10 pv1.img
losetup /dev/loop11 pv2.img
pvcreate /dev/loop10
pvcreate /dev/loop11
vgcreate testvg /dev/loop10 /dev/loop11
lvcreate --type raid1 -L2G -n test testvg
watch lvs -o +raid_sync_action,sync_percent,raid_mismatch_count testvg
# wait for sync
lvcreate --type raid1 -L2G -n test2 testvg
watch lvs -o +raid_sync_action,sync_percent,raid_mismatch_count testvg
# wait for sync
# this will sync OK, observe kernel message for output from md subsys noting time taken
#
lvextend -L+2G testvg/test2
watch lvs -o +raid_sync_action,sync_percent,raid_mismatch_count testvg
# wait for sync
# this will FAIL to sync, the sync will seem to complete instantly, e.g.:
# Feb 02 15:22:50 asr-host kernel: md: resync of RAID array mdX
# Feb 02 15:22:50 asr-host kernel: md: mdX: resync done.
#
lvextend -L+2G testvg/test2
lvchange --syncaction check testvg/test2
watch lvs -o +raid_sync_action,sync_percent,raid_mismatch_count testvg
# observe error count
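Incidentally, the random-fill pipeline at the top can be sanity-checked at small scale before committing to 10 GiB of writes. This is just a sketch of the same technique, scaled down; small.img and the 16-byte passphrase seed are arbitrary and not part of the reproducer. Feeding openssl a finite stream via head lets it exit cleanly rather than dying on SIGPIPE when dd closes the pipe:

```shell
# Same fill technique as in the reproducer, scaled down to 1 MiB
# (small.img is an arbitrary scratch file).
head -c $((1024*1024)) /dev/zero \
  | openssl enc -aes-256-ctr \
      -pass pass:"$(dd if=/dev/urandom bs=16 count=1 2>/dev/null | base64)" \
      -nosalt \
  > small.img
# aes-256-ctr is a stream cipher and -nosalt suppresses the "Salted__"
# header, so the output is exactly as long as the input:
stat -c %s small.img   # → 1048576
```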
This may cause unnecessary administrator alarm ... :/
For some reason, the precise sizes with which the LVs are created, and then extended, do appear to matter.
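For what it's worth, assuming the default 4 MiB physical extent size (the vgcreate above doesn't override it, but this report doesn't verify it), the sizes involved all translate to round extent counts:

```shell
# Extent counts at the assumed default 4 MiB PE size:
echo $(( 2 * 1024 / 4 ))            # initial 2G LV           → 512
echo $(( (2 + 2) * 1024 / 4 ))      # after first +2G extend  → 1024
echo $(( (2 + 2 + 2) * 1024 / 4 ))  # after second +2G extend → 1536
```

The actual PE size in use can be confirmed with `vgs -o vg_extent_size testvg`.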
ProblemType: Bug
DistroRelease: Ubuntu 18.04
Package: lvm2 2.02.176-4.1ubuntu3
ProcVersionSignature: Ubuntu 4.15.0-43.46-generic 4.15.18
Uname: Linux 4.15.0-43-generic x86_64
ApportVersion: 2.20.9-0ubuntu7.5
Architecture: amd64
Date: Sat Feb 2 15:33:16 2019
ProcEnviron:
 TERM=screen
 PATH=(custom, no user)
 LANG=en_GB.UTF-8
 SHELL=/bin/bash
SourcePackage: lvm2
UpgradeStatus: No upgrade log present (probably fresh install)
mtime.conffile..etc.lvm.lvm.conf: 2018-07-22T18:30:15.470358