Activity log for bug #1970236

Date Who What changed Old value New value Message
2022-04-25 15:20:40 Filofel bug added bug
2022-04-25 15:22:31 Filofel description
2022-04-25 15:23:43 Filofel description
2022-04-25 15:24:59 Filofel description
2022-04-25 15:25:32 Filofel description
2022-04-25 15:26:08 Filofel description
2022-04-25 21:32:59 Filofel description
2022-04-25 21:36:30 Filofel description
2022-04-25 21:41:23 Filofel description
2022-04-25 21:42:25 Filofel description

New value (final description):

Description: Ubuntu 20.04.4 LTS
Release: 20.04
grub-pc 2.04-1ubuntu26.15

Booting Ubuntu 20.04.4 from a 4 TiB partition on a 4 TiB GPT disk. The GRUB boot blocks are installed in a bios_grub partition (sectors 34-2047, bios_grub flag set); the Ubuntu boot partition is a plain 4 TB ext4 filling up the rest of the disk.

Suddenly, after a routine automatic Ubuntu kernel update, booting started to fail with the message: "error: attempt to read or write outside of disk (hd0)." Boot-Repair neither found nor fixed anything, and fsck found nothing wrong. The GRUB rescue shell showed the failure happened while loading the new kernel; the previous Linux images still booted. After a painful search, I realized that part of the new kernel file had been allocated by the filesystem above the 2 TiB limit...

Some more investigation suggested that, by default, GRUB uses BIOS drivers to load files from the target partition. This is tersely documented in the GRUB 2.06 manual, in the "nativedisk" command section, and those BIOS drivers are limited to 32-bit sector addresses, i.e. 2 TiB. When using GRUB's native drivers (ahci in my case), everything works: reliably, consistently, permanently. Reverting to the default (BIOS) drivers breaks the boot again.

The native drivers can be activated from the GRUB rescue shell with the nativedisk command, but a better and longer-lasting solution is to embed the native driver in the boot blocks by running grub-install with a parameter such as --disk-module=ahci (ahci, ehci, ATA, ohci or uhci, depending on the hardware), e.g. something like:

  grub-install --disk-module=ahci /dev/sdaX

The problem with that approach is that any later grub-install run without this parameter (as an Ubuntu software update or upgrade might decide to do) may zap the native driver from the GRUB partition, reinstall the default driver, and break the boot again.

grub-install (and/or update-grub) should never produce a potentially broken boot when it can avoid it. Couldn't it (shouldn't it) detect when one of the boot partitions in the boot menu crosses the 2 TiB mark, give a warning, and run grub-install with the appropriate --disk-module=MODULE parameter? (A rough sketch of such a check is given at the end of this log.) 4 TB SSD prices are dropping fast (below 350€ these days), so this problem is likely to show up more and more often.
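For quick reference, the recovery and workaround steps described in the report, condensed into a hedged sketch; the device name /dev/sda is an assumption standing in for the disk that carries the bios_grub partition, so adapt it to the actual layout.

  # One-time recovery, typed at the GRUB prompt (as described in the report):
  # switch from the default BIOS disk driver to GRUB's native drivers, then
  # boot the usual menu entry.
  nativedisk

  # Persistent workaround, run from the installed system or a rescue session:
  # embed the native driver in the GRUB boot blocks. /dev/sda is an assumed
  # example; it must be the disk holding the bios_grub partition.
  sudo grub-install --disk-module=ahci /dev/sda

  # Caveat from the report: a later grub-install run without --disk-module
  # (e.g. during a package upgrade) restores the default BIOS driver and can
  # break the boot again.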
2022-04-25 21:54:25 Filofel summary changed: "Grub2 bios-install defaults to BIOS disk drivers" → "Grub2 bios-install defaults to BIOS disk drivers, breaks boot"
2022-04-25 21:54:54 Filofel summary changed: "Grub2 bios-install defaults to BIOS disk drivers, breaks boot" → "Grub2 bios-install defaults to BIOS disk drivers, may break large disk boot"
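The check the report asks grub-install and/or update-grub to perform could look roughly like the sketch below. It is only an illustration under stated assumptions: sda2 stands in for the partition holding /boot, and the sysfs start and size attributes are read in 512-byte sectors.

  #!/bin/bash
  # Hypothetical check: does the partition holding /boot end beyond the 2 TiB
  # mark reachable with 32-bit sector addresses (2^32 sectors of 512 bytes)?
  PART=sda2                                   # assumption: partition with /boot
  START=$(cat /sys/class/block/$PART/start)   # partition start, 512-byte sectors
  SIZE=$(cat /sys/class/block/$PART/size)     # partition size, 512-byte sectors
  END=$((START + SIZE))
  LIMIT=4294967296                            # 2^32 sectors = 2 TiB
  if [ "$END" -gt "$LIMIT" ]; then
      echo "/dev/$PART extends past the 2 TiB boundary;"
      echo "consider grub-install --disk-module=MODULE (e.g. ahci)."
  fi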