Last night I ran into the same problem: after upgrading from 12.04 LTS to 16.04.1 LTS Server, the machine got stuck at boot.
The last message complained about a missing UUID, which turned out to belong to the /usr filesystem. Running "lvm lvscan" from the initramfs prompt showed all LVs inactive except one; the only active LV was bootvg/root.
I then booted into a rescue system and added "lvm vgchange -ay" to /usr/share/initramfs-tools/scripts/local-top/lvm2, right before the "exit 0" line. After running "update-initramfs -k all -c" and rebooting, the server came up again.
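For reference, here is a sketch of both the one-off recovery and the permanent change. The edit is demonstrated against a hypothetical stand-in file so nothing on a live system is touched; on the real system the script is /usr/share/initramfs-tools/scripts/local-top/lvm2, and the sed insertion assumes GNU sed.

```shell
# One-off recovery from the (initramfs) prompt:
#   lvm lvscan          # list LVs; here everything but bootvg/root was inactive
#   lvm vgchange -ay    # activate all volume groups by hand
#   exit                # resume booting

# Permanent fix: add "lvm vgchange -ay" just before the final "exit 0".
# Demonstrated on a stand-in file; on the real system set
# SCRIPT=/usr/share/initramfs-tools/scripts/local-top/lvm2 instead.
SCRIPT=./lvm2-sample
printf '%s\n' '#!/bin/sh' 'activate_vg "$ROOT"' 'exit 0' > "$SCRIPT"

# GNU sed: insert the activation command before the line matching "exit 0".
sed -i '/^exit 0$/i lvm vgchange -ay' "$SCRIPT"
cat "$SCRIPT"

# Afterwards, rebuild the initramfs for all installed kernels:
# update-initramfs -k all -c
```

The `activate_vg` line is only placeholder content for the sample file; what matters is that the activation command lands immediately before "exit 0", so it runs after the script's normal (and in this case incomplete) activation logic.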
The bootvg is on a RAID1 disk controlled via mdadm.
mdadm --detail /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Sat Dec 20 16:49:58 2014
     Raid Level : raid1
     Array Size : 971924032 (926.90 GiB 995.25 GB)
  Used Dev Size : 971924032 (926.90 GiB 995.25 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Mon Jan 23 09:50:47 2017
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : XXX:1
           UUID : xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxx814e
         Events : 21001

    Number   Major   Minor   RaidDevice State
       0       8       19        0      active sync   /dev/sdb3
       2       8        3        1      active sync   /dev/sda3
pvs
  PV       VG     Fmt  Attr PSize   PFree
  /dev/md1 bootvg lvm2 a--  926.90g 148.90g
The /boot filesystem is on sda1/sdb1, also in a RAID1 array; sda2 and sdb2 are swap.
fdisk -l /dev/sda
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x00000000
Device     Boot   Start        End    Sectors  Size Id Type
/dev/sda1          2048    1026047    1024000  500M fd Linux raid autodetect
/dev/sda2       1026048    9414655    8388608    4G 82 Linux swap / Solaris
/dev/sda3       9414656 1953525167 1944110512  927G fd Linux raid autodetect
lsb_release -rd
Description: Ubuntu 16.04.1 LTS
Release: 16.04
dpkg -l lvm2
ii lvm2 2.02.133-1ubuntu amd64 Linux Logical Volume Manager