Old Linux version booted after upgrade to Ubuntu 19.10

Bug #1843222 reported by BertN45
Affects             Status  Importance  Assigned to
casper (Ubuntu)     New     Undecided   Unassigned
grub2 (Ubuntu)      New     Undecided   Unassigned
zfs-linux (Ubuntu)  New     Undecided   Unassigned

Bug Description

I have a triple boot with the following systems:
- Xubuntu 19.10 from zfs 0.8.1
- Ubuntu Mate 18.04.3 from zfs 0.7.12
- Ubuntu 19.10 from ext4

Upgrading the first system (Xubuntu) to 19.10 worked fine, and I was very happy with the almost perfect result and the nice grub menu.
Upgrading to Ubuntu 19.10 created the following problems:
- After the upgrade to Ubuntu 19.10, that system still booted Linux 5.0 with zfs 0.7.12.
- All grub entries for the ZFS systems disappeared and the whole nice grub menu was gone.

After running update-grub and grub-install, I did see the Linux 5.2 version appear, but the system still booted 5.0.

There were some error messages about mounting/importing during the zfs part of the upgrade, but they were the same as those during the Xubuntu upgrade, and that upgrade worked perfectly.

ProblemType: Bug
DistroRelease: Ubuntu 19.10
Package: zfsutils-linux 0.8.1-1ubuntu11
ProcVersionSignature: Ubuntu 5.0.0-27.28-generic 5.0.21
Uname: Linux 5.0.0-27-generic x86_64
NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
ApportVersion: 2.20.11-0ubuntu7
Architecture: amd64
CurrentDesktop: ubuntu:GNOME
Date: Mon Sep 9 01:37:21 2019
InstallationDate: Installed on 2019-03-10 (183 days ago)
InstallationMedia: Ubuntu 18.04.2 LTS "Bionic Beaver" - Release amd64 (20190210)
SourcePackage: zfs-linux
UpgradeStatus: Upgraded to eoan on 2019-09-09 (0 days ago)
modified.conffile..etc.sudoers.d.zfs: [inaccessible: [Errno 13] Permission denied: '/etc/sudoers.d/zfs']

Revision history for this message
BertN45 (lammert-nijhof) wrote :
Revision history for this message
Didier Roche-Tolomelli (didrocks) wrote :

Thanks for testing the upgrade to 19.10. The only reason you would boot with an older kernel and an older zfs is that your upgrade from 19.04 to 19.10 failed. Can you share the upgrade logs: the contents of /var/log/upgrade and /var/log/dist-upgrade?

The weird part is that do-release-upgrade would have warned you about a failed upgrade.

Revision history for this message
BertN45 (lammert-nijhof) wrote : Re: [Bug 1843222] Re: Old Linux version booted after upgrade to Ubuntu 19.10

I did see a number of red error messages, but they were the same, or
almost the same, as the error messages during the upgrade of my zfs-based
Xubuntu, and that one worked afterwards without any problem. So I
probably ignored those error messages.

Revision history for this message
BertN45 (lammert-nijhof) wrote :

Two other relevant files from /boot/grub:
"grub.cfg", the one actually used (booting 5.0), and "grub.cfg.new", NOT used, which does list Linux 5.2.

Revision history for this message
BertN45 (lammert-nijhof) wrote :

It must have been an error during the upgrade itself, since a fresh install solved my problem.

Revision history for this message
Didier Roche-Tolomelli (didrocks) wrote :

The upgrade failed; in your logs you have:
"/etc/kernel/postinst.d/initramfs-tools:
update-initramfs: Generating /boot/initrd.img-5.2.0-15-generic
I: The initramfs will attempt to resume from /dev/sda2
I: (UUID=bf02ddd4-8d65-40d6-ab24-4fc8a5673dc6)
I: Set the RESUME variable to override this.
/etc/kernel/postinst.d/zz-update-grub:
Sourcing file `/etc/default/grub'
Sourcing file `/etc/default/grub.d/init-select.cfg'
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.2.0-15-generic
Found initrd image: /boot/initrd.img-5.2.0-15-generic
Found linux image: /boot/vmlinuz-5.0.0-27-generic
Found initrd image: /boot/initrd.img-5.0.0-27-generic
Found linux image: /boot/vmlinuz-5.0.0-25-generic
Found initrd image: /boot/initrd.img-5.0.0-25-generic
cannot open 'This': no such pool
run-parts: /etc/kernel/postinst.d/zz-update-grub exited with return code 1
dpkg: error processing package linux-image-5.2.0-15-generic (--configure):
 installed linux-image-5.2.0-15-generic package post-installation script subprocess returned error exit status 1
Processing triggers for dbus (1.12.14-1ubuntu2) ...
Processing triggers for initramfs-tools (0.133ubuntu10) ...
update-initramfs: Generating /boot/initrd.img-5.2.0-15-generic
I: The initramfs will attempt to resume from /dev/sda2
I: (UUID=bf02ddd4-8d65-40d6-ab24-4fc8a5673dc6)
I: Set the RESUME variable to override this.
Processing triggers for libgdk-pixbuf2.0-0:amd64 (2.39.2-3) ...
Processing triggers for rygel (0.38.1-2ubuntu2) ...
Errors were encountered while processing:
 zfsutils-linux
 zfs-initramfs
 friendly-recovery
 zfs-zed
 grub-pc
 linux-image-5.2.0-15-generic
Log ended: 2019-09-09 01:10:12"

Note the "cannot open 'This': no such pool" while updating grub. This is why you still boot kernel 5.0 and don't have the latest zfs after the upgrade.
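For reference, a hedged sketch of how such an interrupted upgrade is usually finished once the grub failure is fixed. The `run` wrapper is purely illustrative: unless APPLY=1 is set in the environment, it only prints the commands instead of executing them.

```shell
# Sketch: dpkg left the packages listed above unconfigured, so after
# fixing whatever broke update-grub, re-running the configure step
# normally completes the upgrade. The `run` wrapper is an illustration
# device: without APPLY=1 it only prints what it would do.
run() {
    if [ -n "${APPLY:-}" ]; then "$@"; else echo "would run: $*"; fi
}
run sudo dpkg --configure -a   # finish configuring the failed packages
run sudo update-grub           # regenerate /boot/grub/grub.cfg
```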

You mentioned a "custom" grub script multiple times; does it have "This" somewhere? We don't have any "This" in the distribution scripts.
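To illustrate the suspicion above, here is a minimal stand-in (not grub's actual code) showing how an unquoted shell expansion of a comment line would hand the word "This" to zpool as a pool name; `zpool_stub` is a hypothetical function standing in for a real zpool call.

```shell
# The first comment line of a stock 40_custom file:
line='# This file provides an easy way to add custom menu entries. Simply type the'

zpool_stub() {                 # stands in for e.g. `zpool list <pool>`
    echo "cannot open '$2': no such pool"
}

set -- $line                   # UNQUOTED expansion: splits on whitespace
zpool_stub list "$2"           # $2 is now "This", word two of the comment
```

Running this prints exactly the error seen in the upgrade log, which is why an unquoted variable in a script that parses grub.cfg is a plausible culprit.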

Revision history for this message
BertN45 (lammert-nijhof) wrote :

I added an attachment with two /boot/grub/grub.cfg files, and one of them includes my 40_custom file. That file does contain "This", but only in the first comment line, as shown below:

### BEGIN /etc/grub.d/40_custom ###
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
menuentry 'Xubuntu 18.04' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-93b2f0baee676e9a' {

I scanned /boot/grub/grub.cfg for "This" and it only occurs in comment lines. grub.cfg.new completely lacks those text includes from 40_custom, as I would expect, but it does include booting from Linux 5.2.

I had the same problem when upgrading the last of my three Ubuntus; that one also booted Linux 5.0. The first upgrade went OK without problems. Maybe it started to go wrong after I upgraded one of the two data pools; that data pool could not be mounted or imported anymore. The second and third upgrades went wrong. Any relation?

Revision history for this message
Didier Roche-Tolomelli (didrocks) wrote :

Can you try to reproduce an upgrade without your 40_custom file?

I don't think your manual pool upgrade is related to this.
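One low-risk way to run that experiment, assuming grub-mkconfig's usual behavior of only executing files in /etc/grub.d that have the execute bit set: clear the bit on 40_custom instead of deleting the file. The commands are only printed here, so the sketch is safe to run anywhere.

```shell
# Sketch: disable 40_custom for one update-grub run without deleting it.
# grub-mkconfig skips non-executable files in /etc/grub.d (assumption
# stated above), so chmod -x takes the custom entries out of the picture.
cmds="sudo chmod -x /etc/grub.d/40_custom
sudo update-grub
sudo chmod +x /etc/grub.d/40_custom"
printf '%s\n' "$cmds"          # print the commands; run them manually
```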

Revision history for this message
BertN45 (lammert-nijhof) wrote :

My laptop has been completely upgraded already. I will try it on one of the two systems on my Ryzen desktop, both still running 19.04.

Revision history for this message
BertN45 (lammert-nijhof) wrote :

OK, I upgraded one of my desktop OSes (Ubuntu MATE 19.04). It runs on a
Ryzen 3 2200G now, but it was initially installed on a Phenom II X4 B97
with a GeForce 8400GS, so you may see some nvidia-related errors too; I
did not purge that module, I simply forgot all about it. This system is
used with QEMU/KVM. I can't upgrade the other OS, because it runs my
main VMs for:

- office work (Xubuntu 18.04.3),
- banking (Ubuntu 16.04.6),
- Dutch TV viewer (Windows 10)

and other VMs with VirtualBox. VirtualBox does not support Linux 5.3 yet.

The upgrade reported errors on the ZFS upgrade, and my data pools,
except "systems", were not mounted. The symptoms were slightly
different: Linux 5.3 was loaded this time, and "sudo zpool import -f vms"
did not work; it wanted me to change the name, because it was an existing pool.
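For what it's worth, `zpool import` accepts an optional new name after the pool name, which is the usual way around such a name clash. A sketch only: `vms-old` is a hypothetical name, and the command is printed rather than executed so it is safe without ZFS present.

```shell
# Sketch: import a pool under a different name to dodge the
# "existing pool" conflict. vms-old is a made-up example name.
pool=vms
newname=vms-old
cmd="sudo zpool import -f $pool $newname"
echo "$cmd"                    # print the command; run it manually
```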

The disk configuration, as shown by my main 19.04 system, is:

               capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
archives     286G   254G      0      0  2.87K  3.81K
   sdc4       286G   254G      0      0  2.87K  3.81K
----------  -----  -----  -----  -----  -----  -----
hp-data      266G   370G      0      3  21.1K  43.5K
   sda2      95.3G   167G      0      0  8.08K  15.9K
   sdb2      73.9G  96.1G      0      1  4.24K  12.6K
   sdc3      97.0G   107G      0      1  7.08K  13.9K
logs            -      -      -      -      -      -
   sdd7       132K   992M      0      0  1.68K  1.12K
cache           -      -      -      -      -      -
   sdd4      6.92M  9.86G      0      0    160  9.41K
----------  -----  -----  -----  -----  -----  -----
systems     20.0G  30.5G     21     17   632K   427K
   sdd6      20.0G  30.5G     21     17   632K   427K
----------  -----  -----  -----  -----  -----  -----
vms          286G   202G     13     10   858K   211K
   sda1       112G  89.9G      5      3   325K  70.6K
   sdb1      75.9G  51.1G      3      2   235K  41.1K
   sdc2      98.3G  60.7G      4      3   297K  58.8K
logs            -      -      -      -      -      -
   sdd3       256K  2.92G      0      0  1.68K  40.9K
cache           -      -      -      -      -      -
   sdd2       598M  33.6G      0      7    160   722K
----------  -----  -----  -----  -----  -----  -----

sdd is a 128 GB SSD; sdd1 is swap and sdd8 is a free partition of 14 GB.
sdb is a laptop HDD :)

I added the upgrade logs.

Good luck

Bert Nijhof

Revision history for this message
BertN45 (bertnijhof) wrote :

Note that I can only use the data pools after I export and re-import them again, and I have to do that on every boot, so I wrote a small script for it. I did not upgrade the data pool itself, because 2 of the 3 systems still run Ubuntu 19.04: one also boots from the zfs "systems" pool and the other from ext4 on sdc5 with swap on sdc6.
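A minimal sketch of what such a re-import script might look like; the pool names come from the zpool iostat listing earlier in the thread, and with the default DRY_RUN=1 it only prints the commands instead of touching any pools.

```shell
#!/bin/sh
# Hypothetical per-boot re-import helper (sketch of the "small script"
# mentioned above). Pool names taken from the iostat output; adjust to
# taste. DRY_RUN=1 (default here) prints instead of executing.
DRY_RUN=${DRY_RUN:-1}

reimport() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "zpool export $1 && zpool import -f $1"
    else
        zpool export "$1" && zpool import -f "$1"
    fi
}

for pool in archives hp-data vms; do
    reimport "$pool"
done
```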

Revision history for this message
BertN45 (bertnijhof) wrote :

Some additional info from conky about the configuration:

Norbert (nrbrtx)
tags: removed: eoan