Comment 9 for bug 1450937

Revision history for this message
Angel Berrios (aberriosdavila) wrote : Re: [Bug 1450937] Re: Deploy Changes button show up without discard icon

Evgeniy L:

So Fuel will not try to re-deploy nodes that have pending disk changes?

On Wed, May 6, 2015 at 9:23 AM, Evgeniy L <email address hidden>
wrote:

> Hi Angel,
>
> To select nodes for deployment, Fuel uses each node's status, which you
> can see in the "status" column of the "fuel node" command's output. If a
> node's status is "ready", no node erasing will be performed, so you can
> safely hit the Deploy Changes button unless you have marked a node for
> deletion.
>
> --
> You received this bug notification because you are subscribed to the bug
> report.
> https://bugs.launchpad.net/bugs/1450937
>
> Title:
> Deploy Changes button show up without discard icon
>
> Status in Fuel: OpenStack installer that works:
> Fix Committed
> Status in Fuel for OpenStack 6.0.x series:
> Confirmed
>
> Bug description:
> I have an operational FUEL V6 environment upgraded from 5.1 that
> triggers disk configuration changes without showing the discard
> changes icon.
>
> The initial environment was a four-node setup like this:
>
> id | status | name         | cluster | ip         | mac               | roles           | pending_roles | online | group_id
> ---|--------|--------------|---------|------------|-------------------|-----------------|---------------|--------|---------
> 20 | ready  | Compute02    | 8       | 10.20.0.32 | 56:3b:3a:b7:0d:44 | cinder, compute |               | True   | 1
> 18 | ready  | Controller01 | 8       | 10.20.0.30 | 5e:31:a7:1b:33:47 | controller      |               | True   | 1
> 19 | ready  | Compute01    | 8       | 10.20.0.31 | ae:fd:d2:c9:de:45 | cinder, compute |               | True   | 1
> 22 | ready  | Storage01    | 8       | 10.20.0.33 | b2:21:a0:aa:c4:46 | cinder          |               | True   | 1
>
> After two months of normal operation, a "disk change" was detected:
>
> 8 | operational | HLPRCLOUD01 | multinode | 2 | [{u'node_id': 18, u'name': u'disks'}, {u'node_id': 19, u'name': u'disks'}, {u'node_id': 20, u'name': u'disks'}] | None
>
> After giving up on finding a way to undo those changes, I created a new
> operational environment of two nodes like this:
>
> [root@fuelnode fuel]# fuel --env 14 node list
> id | status | name         | cluster | ip         | mac               | roles              | pending_roles | online | group_id
> ---|--------|--------------|---------|------------|-------------------|--------------------|---------------|--------|---------
> 25 | ready  | COMPUTE01    | 14      | 10.20.0.34 | 9e:09:8c:67:8f:40 | cinder, compute    |               | True   | 6
> 26 | ready  | CONTROLLER01 | 14      | 10.20.0.35 | ca:6a:86:e3:85:4d | cinder, controller |               | True   | 6
>
> After a week of normal operation, the same situation happened:
>
> id | status      | name        | mode       | release_id | changes                               | pending_release_id
> ---|-------------|-------------|------------|------------|---------------------------------------|-------------------
> 14 | operational | HLPRCLOUD02 | ha_compact | 37         | [{u'node_id': 25, u'name': u'disks'}] | None
>
> Looking into node 25's "changes", I was able to identify that they are
> in fact the Cinder volumes I had created for VMs in the cloud (see
> screenshot).
>
> To manage notifications about this bug go to:
> https://bugs.launchpad.net/fuel/+bug/1450937/+subscriptions
>

--
Ángel Berríos Dávila
@angelberrios

*"We have built a system that persuades us to spend money we don't have,
on things we don't need, to create impressions that won't last, on people
we don't care about"*