Activity log for bug #127125

Date Who What changed Old value New value Message
2007-07-20 10:04:39 Stephan Rügamer bug added bug
2007-07-20 10:23:32 Stephan Rügamer description

Old value:

Binary package hint: linux-source-2.6.20

The following hardware configuration was used to install Feisty, which failed (not completely):

1. OEM hardware: 2x dual-core Opteron, 16 GB RAM, Areca SATA RAID6 controller, 16x 500 GB drives in RAID6 mode, with two volumes:
- First volume: 73 GB
- Second volume: 6.3 TB

2. HP series:

2.1. HP DL365: 16 GB RAM, 2x dual-core Opteron, P800 SmartArray controller, 4 internal SAS drives, external MSA 60 storage with 12x 750 GB drives attached. 3 volumes:
- First volume: RAID1 over 2 SATA drives, 120 GB (internal)
- Second volume: RAID1 over 2 SATA drives, 120 GB (internal)
- Third volume: RAID6 over 12x 750 GB drives (external storage)

2.2. HP DL320s: P400 SmartArray, 1x dual-core Xeon, 12x 500 GB internal SATA drives. 2 volumes in RAID6 mode:
- First volume: 80 GB
- Second volume: the rest (more than 4 TB)

A Feisty server CD image was used as the install medium; d-i recognized all controllers and volumes without any problems. Creating the necessary partitions on the volumes works too, even on the >2 TB ones. The partition tables are msdos labels for the <2 TB partitions and EFI/GPT tables for the >2 TB ones, which is also correct.

The reboot into the production system failed when mounting every >2 TB partition. The log file told me (as the only error report) that the superblock on those partitions is corrupted. XFS was used as the filesystem on all >2 TB partitions; the same applies to all other partitions except the boot partition, which is ext2.

Recreating the partition manually with parted and reformatting it with mkfs.xfs works again, and the volume is mountable. Rebooting afterwards results in the same error message. After deleting the EFI/GPT table from the volume, creating the XFS filesystem directly on the block device (e.g. /dev/sdb (Areca) or /dev/cciss/c0d1 (SmartArray controller)), changing fstab, and rebooting, the volume is mounted without any error and works as expected (see the command sketch after this log).

This setup did work with Dapper, at least on the OEM machine: Dapper doesn't know anything about the P400/P800 controllers and doesn't recognize the large (>2 TB) volumes; HP changed the cciss driver after the Dapper release. So, regarding Feisty, this is a regression. I didn't test Edgy, because we normally use Dapper as our Ubuntu distribution (hint: LTS).

I know this setup is not standard, but since most data centers need cheap storage solutions (SAN is not cheap ;)), machines like HP's DL320s will become more popular. You can't even compare those machines (and their setups) with Sun's X4500 (6 RAID controllers with 8 drives per controller, so 48 drives in one cage).

If you need more information, please contact me directly via this bug report or by e-mail/Jabber; the contact info is on my Launchpad page.

Regards,
\sh

New value: identical to the old value, with one line added directly after the binary package hint: "Update: It's 64bit Ubuntu :)"
2007-12-04 18:58:50 Richard Laager marked as duplicate 107326
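
For reference, a minimal shell sketch of the failure path and the workaround described in the report above. This is an illustration, not the reporter's exact session: the device name /dev/sdb and the mount point /srv/storage are examples, the parted invocations are assumed (the report does not give the exact commands), and the report does not say how the GPT table was deleted.

    # Failure path as reported: GPT label + partition + XFS on a >2 TB volume
    parted /dev/sdb mklabel gpt
    parted /dev/sdb mkpart primary xfs 0% 100%
    mkfs.xfs /dev/sdb1
    mount /dev/sdb1 /srv/storage   # mounts fine now...
    # ...but after a reboot, mounting fails with a corrupted-superblock error

    # Workaround as reported: remove the GPT table, put XFS on the whole device
    dd if=/dev/zero of=/dev/sdb bs=512 count=34   # assumption: zero the primary GPT;
                                                  # the report only says it was deleted
    mkfs.xfs -f /dev/sdb
    echo "/dev/sdb  /srv/storage  xfs  defaults  0  0" >> /etc/fstab
    mount /srv/storage             # per the report, this survives reboots

Note that GPT also keeps a backup header at the end of the disk, so depending on the tooling that backup may need clearing as well before the kernel stops seeing a partition table.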