gparted produces unusual results when ZFS partitions exist

Bug #1808421 reported by HankB on 2018-12-13
This bug affects 2 people
Affects Status Importance Assigned to Milestone
gparted (Ubuntu)
Undecided
Unassigned

Bug Description

Run gparted on a disk that has ZFS partitions. gparted shows the entire disk as a single partition, while parted displays the partition table correctly. A gparted screenshot is attached.

hbarta@saugage:~$ sudo parted /dev/sda
GNU Parted 3.2
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: ATA Samsung SSD 850 (scsi)
Disk /dev/sda: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number Start End Size File system Name Flags
 1 1049kB 538MB 537MB fat32 boot, esp
 2 538MB 555MB 16.8MB zfs Microsoft reserved partition msftres
 3 555MB 105GB 105GB ntfs Basic data partition msftdata
 4 105GB 420GB 315GB zfs rpool

(parted) q
hbarta@saugage:~$

hbarta@saugage:~$ sudo sgdisk -p /dev/sda
Disk /dev/sda: 1953525168 sectors, 931.5 GiB
Model: Samsung SSD 850
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): EBA09D2E-0F70-4D11-8E37-C1C170CFD9DD
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 1953525134
Partitions will be aligned on 2048-sector boundaries
Total free space is 1133243757 sectors (540.4 GiB)

Number Start (sector) End (sector) Size Code Name
   1 2048 1050623 512.0 MiB EF00
   2 1050624 1083391 16.0 MiB 0C01 Microsoft reserved ...
   3 1083392 205883391 97.7 GiB 0700 Basic data partition
   4 205883392 820283391 293.0 GiB 8300 rpool
hbarta@saugage:~$

hbarta@saugage:~$ lsb_release -rd
Description: Ubuntu 18.10
Release: 18.10
hbarta@saugage:~$

Reproduce:
1) Configure (perhaps install) Ubuntu on a drive with a ZFS partition.
2) Open gparted.

Note: Same issue happens with gparted on the 18.04.1 desktop install USB.

ProblemType: Bug
DistroRelease: Ubuntu 18.10
Package: gparted 0.32.0-1ubuntu1
ProcVersionSignature: Ubuntu 4.18.0-12.13-generic 4.18.17
Uname: Linux 4.18.0-12-generic x86_64
NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
ApportVersion: 2.20.10-0ubuntu13.1
Architecture: amd64
CurrentDesktop: ubuntu:GNOME
Date: Thu Dec 13 15:05:13 2018
ProcEnviron:
 TERM=xterm-256color
 PATH=(custom, no user)
 XDG_RUNTIME_DIR=<set>
 LANG=en_US
 SHELL=/bin/bash
SourcePackage: gparted
UpgradeStatus: Upgraded to cosmic on 2018-12-13 (0 days ago)
mtime.conffile..etc.logrotate.d.apport: 2018-12-12T13:31:49.347693

HankB (hbarta) wrote :

Problem gparted display.

Curtis Gedak (gedakc) wrote :

From the description it appears that at one time the whole disk was formatted as zfs and that those file system signatures remain.

You can check what is going on with the following two commands:

  sudo blkid /dev/sda
  sudo wipefs --no-act /dev/sda

To fix this, you will likely need to remove the extraneous file system signatures. The wipefs command can help you remove the specific incorrect signatures.

For reference, see:

zfs partition wrongly shows occupying the whole disk
https://gitlab.gnome.org/GNOME/gparted/issues/14
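As a follow-up to the wipefs suggestion above, here is a sketch (not from the bug report) of turning `wipefs --no-act` output into one explicit erase command per stale signature, using wipefs's `--offset` and `--backup` options. The column parsing assumes the layout shown later in this thread.

```python
# Sketch: build one `wipefs --backup --offset ...` command per signature
# reported by `wipefs --no-act`. The column layout assumed here matches
# the "DEVICE OFFSET TYPE UUID LABEL" output quoted in this bug report.

def wipefs_commands(noact_output: str, device: str) -> list[str]:
    """Return a wipefs command string for each reported signature row."""
    cmds = []
    for line in noact_output.splitlines():
        fields = line.split()
        # Skip the header line and anything that is not a signature row.
        if len(fields) < 3 or fields[1] == "OFFSET":
            continue
        offset = fields[1]  # e.g. "0xe8e0d3f000"
        cmds.append(f"wipefs --backup --offset {offset} {device}")
    return cmds

sample = """DEVICE OFFSET TYPE UUID LABEL
sda 0xe8e0d3f000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d3e000 zfs_member 4510611204828545482 rpool"""
for cmd in wipefs_commands(sample, "/dev/sda"):
    print(cmd)
```

Reviewing the generated commands before running them (and keeping the `--backup` copies) is safer than wiping all signatures at once, since the partition table itself must not be touched.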

Curtis Gedak (gedakc) wrote :

Alternatively, you may need a fixed version of blkid if it is reporting two contradictory signatures for the whole disk.

For example:
  - TYPE="zfs_member" ZFS file system on the whole disk.
  - PTTYPE="gpt" GPT (GUID Partition table).

HankB (hbarta) wrote :

That gitlab issue looks like what I'm seeing. I've collected the requested information. Can you confirm this is the problem?
hbarta@saugage:~$ sudo blkid /dev/sda
[sudo] password for hbarta:
/dev/sda: LABEL="rpool" UUID="4510611204828545482" UUID_SUB="9816084798696086204" TYPE="zfs_member" PTUUID="eba09d2e-0f70-4d11-8e37-c1c170cfd9dd" PTTYPE="gpt"
hbarta@saugage:~$ sudo wipefs --no-act /dev/sda
DEVICE OFFSET TYPE UUID LABEL
sda 0xe8e0d3f000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d3e000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d3d000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d3c000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d3b000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d3a000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d39000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d38000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d37000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d36000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d35000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d34000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d33000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d32000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d31000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d30000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d2f000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d2e000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d2d000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d2c000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d2b000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d2a000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d29000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d28000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d27000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d26000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d25000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d24000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d23000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d7f000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d7e000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d7d000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d7c000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d7b000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d7a000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d79000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d78000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d77000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d76000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d75000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d74000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d73000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d72000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d71000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d70000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d6f000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d6e000 zfs_member 4510611204828545482 rpool
sda 0xe8e0d6d000 zfs_member 4510611204828545482 rpool
sda 0...
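The signatures above cluster near the end of the disk, which is consistent with ZFS's on-disk layout: each vdev carries four 256 KiB labels, two at the front of the device and two in the last 512 KiB. A small sketch of that offset arithmetic (the 10 MiB device size is a hypothetical example, not this disk):

```python
# ZFS keeps four 256 KiB vdev labels: L0 at offset 0, L1 at 256 KiB, and
# L2/L3 in the last 512 KiB of the device (size aligned down to 256 KiB).
LABEL_SIZE = 256 * 1024

def zfs_label_offsets(device_bytes: int) -> list[int]:
    aligned = (device_bytes // LABEL_SIZE) * LABEL_SIZE
    return [0, LABEL_SIZE, aligned - 2 * LABEL_SIZE, aligned - LABEL_SIZE]

# Hypothetical 10 MiB device, just to show the shape of the result.
print(zfs_label_offsets(10 * 1024 * 1024))
```

This is why a whole-disk zfs_member signature survives repartitioning: the end-of-device labels sit outside the regions most formatting tools rewrite.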


HankB (hbarta) wrote :

It looks like your comment is appropriate:
86204" TYPE="zfs_member" PTUUID="eba09d2e-0f70-4d11-8e37-c1c170cfd9dd"
PTTYPE="gpt"

Is the correct response to "follow up with the authors of the util-linux package"?

Curtis Gedak (gedakc) wrote :

Yes, the issue you are experiencing looks similar.

The next course of action is to "follow up with the authors of the util-linux package".

They should have a mailing list or something to track issues.

Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in gparted (Ubuntu):
status: New → Confirmed

BertN45 (lammert-nijhof) wrote :

If there are other partitions on the disk, the display is correct; if only two ZFS partitions are on the disk, the display is wrong. The disk was originally partitioned with the gnome-disk-utility.

BertN45 (lammert-nijhof) wrote :

And now the complex SSD (boot/data pool, 2x log, and 2x cache partitions) is displayed correctly.
