Deploy Changes button shows up without discard icon

Bug #1450937 reported by Angel Berrios
Affects            | Status        | Importance | Assigned to | Milestone
-------------------|---------------|------------|-------------|----------
Fuel for OpenStack | Fix Committed | High       | Artem Roma  | 6.1
6.0.x              | Invalid       | High       | Artem Roma  | 6.0.1

Bug Description

I have an operational FUEL V6 environment, upgraded from 5.1, that reports disk configuration changes without showing the discard changes icon.

The initial environment was a four-node setup like this:

id | status | name | cluster | ip | mac | roles | pending_roles | online | group_id
---|--------|--------------|---------|------------|-------------------|-----------------|---------------|--------|---------
20 | ready | Compute02 | 8 | 10.20.0.32 | 56:3b:3a:b7:0d:44 | cinder, compute | | True | 1
18 | ready | Controller01 | 8 | 10.20.0.30 | 5e:31:a7:1b:33:47 | controller | | True | 1
19 | ready | Compute01 | 8 | 10.20.0.31 | ae:fd:d2:c9:de:45 | cinder, compute | | True | 1
22 | ready | Storage01 | 8 | 10.20.0.33 | b2:21:a0:aa:c4:46 | cinder | | True | 1

After two months of normal operation, a "disk change" was detected:

id | status | name | mode | release_id | changes | pending_release_id
---|-------------|-------------|-----------|------------|-----------------------------------------------------------------------------------------------------------------|-------------------
 8 | operational | HLPRCLOUD01 | multinode | 2 | [{u'node_id': 18, u'name': u'disks'}, {u'node_id': 19, u'name': u'disks'}, {u'node_id': 20, u'name': u'disks'}] | None

Having given up on finding a way to undo those changes, I created a new operational environment of two nodes like this:

[root@fuelnode fuel]# fuel --env 14 node list
id | status | name | cluster | ip | mac | roles | pending_roles | online | group_id
---|--------|--------------|---------|------------|-------------------|--------------------|---------------|--------|---------
25 | ready | COMPUTE01 | 14 | 10.20.0.34 | 9e:09:8c:67:8f:40 | cinder, compute | | True | 6
26 | ready | CONTROLLER01 | 14 | 10.20.0.35 | ca:6a:86:e3:85:4d | cinder, controller | | True | 6

After a week of normal operation, the same situation happened:

id | status | name | mode | release_id | changes | pending_release_id
---|-------------|-------------|------------|------------|---------------------------------------|-------------------
14 | operational | HLPRCLOUD02 | ha_compact | 37 | [{u'node_id': 25, u'name': u'disks'}] | None

Looking into node 25's "changes", I was able to identify that the "changes" are in fact the Cinder volumes I had created for VMs in the cloud (see screenshot).

Revision history for this message
Angel Berrios (aberriosdavila) wrote :
Changed in fuel:
milestone: none → 6.1
assignee: nobody → Fuel Python Team (fuel-python)
importance: Undecided → High
Mike Scherbakov (mihgen)
tags: added: customer-found
Changed in fuel:
status: New → Confirmed
Revision history for this message
Angel Berrios (aberriosdavila) wrote :

After further review, the "disk change" for node-25 (COMPUTE01) was triggered on April 29.

Including the nailgun logs for this event.

Revision history for this message
Vitaly Kramskikh (vkramskikh) wrote :

In 6.1, the Fuel UI no longer takes the cluster's "changes" attribute into account, as it makes no sense to track disk and interface configuration changes: we don't support changing disk or interface configuration after deployment. So this bug should be gone in 6.1 (though the "changes" attribute is kept in the API output, presumably for backward compatibility).
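
For anyone who wants to see what that leftover attribute looks like, here is a minimal sketch that reads a cluster's "changes" list straight from the Nailgun REST API. The master address, the port, and the unauthenticated access are assumptions about a default 6.x lab setup; a real installation may require a Keystone auth token.

    # Minimal sketch, not part of the bug report: dump the cluster's "changes"
    # attribute that the 6.0 UI used to turn into the "Deploy Changes" prompt.
    import requests

    NAILGUN = "http://10.20.0.2:8000"   # hypothetical Fuel master address
    CLUSTER_ID = 14                     # environment id from this report

    resp = requests.get("{0}/api/clusters/{1}".format(NAILGUN, CLUSTER_ID))
    resp.raise_for_status()

    # Expected shape, per the report: [{u'node_id': 25, u'name': u'disks'}]
    print(resp.json().get("changes"))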

tags: added: ui
Revision history for this message
Angel Berrios (aberriosdavila) wrote :

@Vitaly

This is basically my point.

There were no changes; the system reflects changes that did not happen, and the consequence is that I have a broken system that I cannot operate anymore, unless there is a way to revert this state that I am not aware of.

Revision history for this message
Dmitry Pyzhov (dpyzhov) wrote :

This bug will disappear in 6.1

Changed in fuel:
status: Confirmed → Fix Committed
Revision history for this message
Angel Berrios (aberriosdavila) wrote :

@Dmitry

Just to be clear, and for the sake of other people who might come across the same situation:

The situation shown for environment (cluster) #14 in this bug report leaves two options:

1. Hit the [Deploy changes] button;

>> expected aftermath: Fuel will try to perform a new deployment of the node and thus erase all the VMs already allocated on that node.

2. Do not hit [Deploy changes] and continue to run as is;

>> expected aftermath: You will run a cloud environment to which you can never add or remove nodes without performing a new (clean) deployment.

Is this correct?

Revision history for this message
Evgeniy L (rustyrobot) wrote :

Hi Angel,

To select the nodes for deployment, Fuel uses the nodes' statuses, which you can see in the "status" column of the "fuel node" command output. If a node's status is "ready", no node erasing will be performed, so you can hit the Deploy Changes button safely, unless you have marked a node for deletion.
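
In other words (an illustrative sketch of the rule described above, not actual Nailgun code): a "Deploy Changes" run only acts on nodes whose status is not "ready" or that are explicitly marked for deletion, so a stale "changes" entry on a ready node does not by itself cause anything to be redeployed or erased.

    # Illustrative only: the selection rule described above, not Nailgun's code.
    def nodes_touched_by_deploy(nodes):
        """Nodes that a 'Deploy Changes' run would actually act on."""
        return [
            n for n in nodes
            if n["status"] != "ready" or n.get("pending_deletion", False)
        ]

    nodes = [
        {"id": 25, "status": "ready", "pending_deletion": False},  # COMPUTE01
        {"id": 26, "status": "ready", "pending_deletion": False},  # CONTROLLER01
    ]
    assert nodes_touched_by_deploy(nodes) == []  # nothing is erased or redeployed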

Revision history for this message
Angel Berrios (aberriosdavila) wrote : Re: [Bug 1450937] Re: Deploy Changes button show up without discard icon

Evgeniy L:

It will not try to re-deploy with pending disk changes?


Revision history for this message
Vitaly Kramskikh (vkramskikh) wrote :

If it doesn't erase existing nodes and the UI doesn't use this field anymore, I think we just need to remove it from the CLI so it won't confuse users.

Revision history for this message
Angel Berrios (aberriosdavila) wrote :

Vitaly:

I can confirm that it does not. I was able to successfully apply the changes without losing the environment.

The button message is deceiving, though. When I brought in the two new nodes, I had over 20 VMs running in the original setup, and during an initial deployment any change in disk configuration triggers a redeployment of the node. So when you receive this message in a production setup, you do not want to risk the consequences of the same behaviour.

I agree that it is at least confusing.

Thank you.

Revision history for this message
Dmitry Pyzhov (dpyzhov) wrote :

We need to remove the 'changes' attribute from python-fuelclient.

Changed in fuel:
status: Fix Committed → Confirmed
tags: added: module-client
Changed in fuel:
assignee: Fuel Python Team (fuel-python) → Artem Roma (aroma-x)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to python-fuelclient (master)

Fix proposed to branch: master
Review: https://review.openstack.org/182647

Changed in fuel:
status: Confirmed → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to python-fuelclient (master)

Reviewed: https://review.openstack.org/182647
Committed: https://git.openstack.org/cgit/stackforge/python-fuelclient/commit/?id=e4ebbc720c2d5e4546b51758c5796821ed4377f6
Submitter: Jenkins
Branch: master

commit e4ebbc720c2d5e4546b51758c5796821ed4377f6
Author: Artem Roma <email address hidden>
Date: Wed May 13 15:29:49 2015 +0300

    Remove "changes" column for 'env --list' command output

    Tests' infrastructure was also updated to remove the generation of fake
    changes.

    Change-Id: Ie005c965fdc1349abf8141b7df6467dfe73d2e92
    Closes-Bug: #1450937
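
For readers who don't follow python-fuelclient internals, the gist of the fix is simply to stop printing the stale "changes" field. A rough, hypothetical sketch of the approach (not the actual patch; see the review link above for the real change, and the field names are assumed from the table headers shown earlier in this report):

    # Rough sketch only -- not the actual python-fuelclient patch.
    # The env listing is rendered from a fixed sequence of field names;
    # dropping 'changes' from that sequence is enough to hide the stale entries.
    ENV_LIST_FIELDS_OLD = (
        "id", "status", "name", "mode", "release_id", "changes", "pending_release_id",
    )
    ENV_LIST_FIELDS_NEW = (
        "id", "status", "name", "mode", "release_id", "pending_release_id",
    )

    def format_env_row(env, fields=ENV_LIST_FIELDS_NEW):
        # Hypothetical helper: join the selected fields into one table row.
        return " | ".join(str(env.get(field, "")) for field in fields)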

Changed in fuel:
status: In Progress → Fix Committed
Revision history for this message
Artem Roma (aroma-x) wrote :

Moving the bug to "invalid" for the 6.0.1 milestone, as we don't have versioning of the Fuel client yet. The fix will apply to clusters of any version after the Fuel master node is upgraded to version >= 6.1, so the issue shouldn't reproduce for them.
