'fuel2 graph execute' with --noop option causes node(s) to redeploy

Bug #1621808 reported by Alexandra
Affects: Fuel for OpenStack | Status: Invalid | Importance: High | Assigned to: Fuel Toolbox

Bug Description

The 'fuel2 graph execute' command with the --noop option runs a real node deployment.

Actions:
1) Run graph execute with noop
[root@nailgun ~]# fuel2 graph execute -e 1 -n 1 -t default --noop
Deployment task with id 29 for the environment 1 has been started.

2) Check fuel tasks:
[root@nailgun ~]# fuel task
29 | running | deploy | 1 | 0 | 857d278d-8178-4e92-becd-691c084e02be
30 | running | deployment | 1 | 0 | 6129d653-386f-4361-827d-de808df3a5c8

There should be a 'dry_run_deployment' task instead of deploy/deployment.

3) Check node(s) state
[root@nailgun ~]# fuel node
id | status | name | cluster | ip | mac | roles | pending_roles | online | group_id
---+-----------+---------------------+---------+-------------+-------------------+------------+---------------+--------+---------
 6 | ready | slave-06_cinder | 1 | 10.109.13.8 | 64:bf:80:3e:c2:16 | cinder | | 1 | 1
 1 | deploying | slave-01_controller | 1 | 10.109.13.3 | 64:28:51:54:57:3f | controller | | 1 | 1
 3 | ready | slave-03_controller | 1 | 10.109.13.5 | 64:c1:3f:bf:61:cd | controller | | 1 | 1
 5 | ready | slave-05_compute | 1 | 10.109.13.7 | 64:a7:16:b7:72:73 | compute | | 1 | 1
 2 | ready | slave-02_controller | 1 | 10.109.13.4 | 64:96:7e:5a:3d:a7 | controller | | 1 | 1
 4 | ready | slave-04_compute | 1 | 10.109.13.6 | 64:f2:6f:2c:f2:f3 | compute | | 1 | 1

Node 1 is in the 'deploying' state. It should be 'ready', since the --noop option was used.

Environment:
9.1 snapshot #231 + applied workarounds for noop run support
3 controllers + 2 Computes + 1 Cinder
Neutron with VLAN segmentation

Revision history for this message
Alexandra (aallakhverdieva) wrote :

Diagnostic snapshot (same configuration, but 9.1 snapshot #239) - https://drive.google.com/open?id=0BxmbtGZe1aPtUEVJWVlTamY5a3c

Changed in fuel:
importance: Undecided → High
assignee: nobody → Fuel Toolbox (fuel-toolbox)
milestone: none → 9.1
status: New → Confirmed
tags: added: area-python
Revision history for this message
Bulat Gaifullin (bulat.gaifullin) wrote :

1. The task name is expected to be 'deployment' even for a noop deployment; there is no separate task name for a noop deployment when 'graph execute' is used.
2. The 'deploying' state shows that a node is in progress. When a noop deployment runs, the node also switches to the in-progress state; when the task completes, the node returns to its previous state. This is an implementation detail of the 'graph execute' command.
Also, please check that real tasks do not start when a noop deployment is used.

Changed in fuel:
status: Confirmed → Incomplete
Dmitry Pyzhov (dpyzhov)
Changed in fuel:
milestone: 9.1 → 9.2
Revision history for this message
Alexandra (aallakhverdieva) wrote :

I've checked that no changes are made during graph execution with --noop, i.e. my actions were:
- change /etc/nova/nova.conf
- run # fuel2 graph execute -e 1 -f -t default --noop
- check /etc/nova/nova.conf after the task finishes

As a result, my manual changes to /etc/nova/nova.conf are still in place. That's good.
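The check above boils down to a before/after comparison of a config file across the noop run. The sketch below is a local, self-contained illustration of that logic only (a temporary file stands in for /etc/nova/nova.conf on the node, and the `:` builtin stands in for the 'fuel2 graph execute ... --noop' call); it is not Fuel tooling.

```shell
# Checksum a config file before and after an operation, then confirm
# it is unchanged. The temp file is a stand-in for /etc/nova/nova.conf;
# ':' is a stand-in for 'fuel2 graph execute -e 1 -f -t default --noop'.
tmpfile=$(mktemp)
echo "debug = True" > "$tmpfile"

before=$(md5sum "$tmpfile" | cut -d' ' -f1)
: # noop stand-in for the graph execution
after=$(md5sum "$tmpfile" | cut -d' ' -f1)

if [ "$before" = "$after" ]; then
    echo "unchanged"
else
    echo "CHANGED"
fi
rm -f "$tmpfile"
```

In the real check the checksums would be taken on the target node (and the task waited on via 'fuel task') rather than locally.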

But according to the links below, a noop run shouldn't change the cluster or node state/status:
https://mirantis.jira.com/browse/PROD-6522 (see Description)
https://github.com/openstack/fuel-specs/blob/master/specs/10.0/puppet-noop-run.rst

I suppose we need to document the behavior of noop graph execution if it is really expected.

Changed in fuel:
status: Incomplete → New
Revision history for this message
Vladimir Sharshov (vsharshov) wrote :

It's expected behavior: we temporarily change the node status to 'deploying' to show the progress of the noop operation. After that, Nailgun moves nodes back to their previous state: provisioned, ready or error.

Thank you for your attention to this behavior. I think it is a good idea to add information about it to the user documentation.

Changed in fuel:
status: New → Invalid
tags: added: release-notes
Changed in fuel:
milestone: 9.2 → 9.1