raid 1 goes across all the drives / disk management in UI not working correctly

Bug #1267569 reported by Aleksandr Shaposhnikov
This bug affects 2 people
Affects Status Importance Assigned to Milestone
Fuel for OpenStack
In Progress
High
Meg McRoberts

Bug Description

RAID 1 is created across all the drives in the system.

If the system has 40 drives, all of them will be used for the boot partition.
This isn't good behavior. Fuel should allow selecting how many drives are used for this: one, two, all, etc.

Revision history for this message
Aleksandr Shaposhnikov (alashai8) wrote :

Add:

Even if only one drive is configured in the Web UI, all available drives will be used.

Revision history for this message
Aleksandr Shaposhnikov (alashai8) wrote :

Workaround:

Modify the following file and restart supervisord right after the Mirantis OpenStack master node installation, or before creating a cluster:

/opt/nailgun/lib/python2.6/site-packages/nailgun/volumes/manager.py:557

-boot_is_raid = True if disks_count > 1 else False
+boot_is_raid = False

/etc/init.d/supervisord restart
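The effect of that one-line patch can be sketched as follows. This is a simplified, hypothetical model of nailgun's decision, not the actual code; only `disks_count` comes from the quoted source line.

```python
# Sketch of the boot-RAID decision the workaround changes (hypothetical,
# simplified; not nailgun's real implementation).

def boot_is_raid(disks_count, patched=False):
    # Stock logic: mirror /boot as RAID 1 whenever more than one disk exists.
    # The workaround hard-codes the result to False.
    if patched:
        return False
    return disks_count > 1

# With 40 disks, the stock code mirrors /boot across all of them:
assert boot_is_raid(40) is True
# After the one-line patch, no boot RAID is created:
assert boot_is_raid(40, patched=True) is False
```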

Dmitry Pyzhov (dpyzhov)
Changed in fuel:
importance: Undecided → Medium
status: New → Confirmed
assignee: nobody → Fuel Python Team (fuel-python)
milestone: none → 4.1
Revision history for this message
Aleksandr Shaposhnikov (alashai8) wrote :

The workaround does not work. It only disables the RAID; disk space is still used for the /boot physical volumes and the rest. Basically this means the disk management system in the UI is not working at all.
This is a critical bug.

summary: - raid 1 goes across all the drives
+ raid 1 goes across all the drives / disk management in UI not working
+ correctly
Changed in fuel:
importance: Medium → High
Revision history for this message
Mike Scherbakov (mihgen) wrote :

> This isn't good behavior.
Why? What is the user impact?

Roman, why did you raise the priority to High, too?

We have a boot partition on every HDD for the following reason: we can't read the BIOS configuration to determine which disk the node boots from, so we simply put an MBR + /boot on every disk. We used to have a "Make bootable" button, but users made mistakes and chose the wrong disks, ending up with non-booting nodes.

> Basically this means the disk management system in the UI is not working at all.
Please be concrete; this statement contains no information about what is actually broken.

If a user wants to exclude some disks from the Fuel configuration, and is sure that the excluded disks are not first in the BIOS boot order, then that is another story, for which we would need a blueprint and further discussion.
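The rationale above (no access to the BIOS boot order, so put an MBR + /boot everywhere) can be sketched roughly like this. Illustrative Python only; the function and field names are made up, not nailgun's API.

```python
def plan_boot_layout(disks):
    """Place an MBR and a small /boot partition on every disk, since Fuel
    cannot read the BIOS config to learn which disk the node boots from.
    Hypothetical sketch; field names are illustrative."""
    return [
        {"disk": d, "mbr": True,
         "partitions": [{"mount": "/boot", "size_mb": 200}]}
        for d in disks
    ]

layout = plan_boot_layout(["sda", "sdb", "sdc"])
# Every disk gets an MBR and a /boot, whichever one the BIOS picks:
assert all(entry["mbr"] for entry in layout)
assert len(layout) == 3
```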

Revision history for this message
Evgeniy L (rustyrobot) wrote :

So, does this ticket have High importance or not? We could use checkboxes for each disk (we had a discussion about that on the mailing list: https://<email address hidden>/msg00365.html).

This task will require a lot of testing time (because the disk allocation logic is complicated), so we need to decide on its priority.

Revision history for this message
Mike Scherbakov (mihgen) wrote :

I'm setting the status to Invalid. Please feel free to reopen and raise this question on the fuel-dev ML; however, the current behavior is in full accordance with the design document we had for disk partitioning back in 3.2.

Changed in fuel:
status: Confirmed → Won't Fix
Changed in fuel:
status: Won't Fix → Triaged
tags: added: customer-found
Changed in fuel:
status: Triaged → Incomplete
Revision history for this message
Aleksandr Shaposhnikov (alashai8) wrote :

The customer would like the disks that are not used by OpenStack/Ceph to be left untouched.

Real-life example (consider this an extension of the bug):
In the Fuel node's disk management section I partitioned only the sda drive. I also have sdb and sdc and left them unused because I plan to use them for my own purposes on these servers. Maybe I keep backups/recovery data on them, or plan to use these SSD drives as bcache. Whatever; I simply left them unused in Fuel's terms and started the deployment.

Expected behavior:
The OpenStack cluster is installed without any impact on the unused disks. They should be in the same state they were in before OpenStack provisioning/deployment.

Observed behavior:
Fuel formats them and creates a boot partition on each of them, making a RAID 1 out of all the devices I have.

Revision history for this message
Evgeniy L (rustyrobot) wrote :

Here is another workaround.

I had three disks (sda, sdb, and sdc); I removed the boot sections via the CLI:

    fuel provisioning --env-id 1 --default -d

    # remove this section for sdc
    - size: 300
      type: boot
    - file_system: ext2
      mount: /boot
      name: Boot
      size: 200
      type: raid

    fuel provisioning --env-id 1 -u

    # run deployment

And there was no RAID partition on sdc:

    [root@node-2 ~]# cat /proc/mdstat
    Personalities : [raid1]
    md0 : active raid1 sda3[0] sdb3[1]
          204736 blocks super 1.0 [2/2] [UU]

For the next release we decided to add a special checkbox so that the boot RAID is not allocated on all disks.
Here is the blueprint: https://blueprints.launchpad.net/fuel/+spec/nailgun-bootable-checkbox-for-disks
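The manual YAML edit above amounts to filtering boot-related volumes out of one disk's volume list. A hypothetical Python sketch of that filter (field names follow the YAML fragment in the comment, not nailgun's real API):

```python
def strip_boot_volumes(volumes):
    """Drop the 'boot' stub and the RAID-backed /boot volume, mirroring
    the manual edit of the downloaded provisioning YAML. Hypothetical
    sketch; not nailgun code."""
    def is_boot(vol):
        return vol.get("type") == "boot" or (
            vol.get("type") == "raid" and vol.get("mount") == "/boot")
    return [v for v in volumes if not is_boot(v)]

sdc_volumes = [
    {"size": 300, "type": "boot"},
    {"file_system": "ext2", "mount": "/boot", "name": "Boot",
     "size": 200, "type": "raid"},
    {"size": 10000, "type": "lvm_meta_pool"},  # unrelated volume survives
]
assert strip_boot_volumes(sdc_volumes) == [
    {"size": 10000, "type": "lvm_meta_pool"}]
```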

Changed in fuel:
milestone: 4.1 → 5.0
status: Incomplete → Triaged
Dmitry Pyzhov (dpyzhov)
Changed in fuel:
milestone: 5.0 → 5.1
Mike Scherbakov (mihgen)
tags: added: release-notes
Revision history for this message
Meg McRoberts (dreidellhasa) wrote :

Added to Known Issues in 5.0 Release Notes.

Revision history for this message
Meg McRoberts (dreidellhasa) wrote :

Listed as "Known Issue" in 5.0.1 Release Notes.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to fuel-docs (master)

Fix proposed to branch: master
Review: https://review.openstack.org/133112

Changed in fuel:
assignee: Fuel Python Team (fuel-python) → Meg McRoberts (dreidellhasa)
status: Triaged → In Progress