[nsxv] OSTF fails after compute-vmware node has been deleted

Bug #1593773 reported by Andrey Setyaev
This bug affects 1 person
Affects: Fuel NSXv plugin
Status: Fix Released
Importance: High
Assigned to: Artem Savinov

Bug Description

fuel: fuel-9.0-mos-485
nsxv: custom build

    1. Connect to the Fuel web UI with the plugin preinstalled.
    2. Create a new environment with the following parameters:
        * Compute: KVM/QEMU with vCenter
        * Networking: Neutron with VLAN segmentation
        * Storage: default
        * Additional services: default
    3. Add nodes with the following roles:
        * Controller
        * Controller
        * Controller
        * ComputeVMware
    4. Configure interfaces on nodes.
    5. Configure network settings.
    6. Enable and configure NSXv plugin.
    7. Configure VMware vCenter settings. Add 2 vSphere clusters and configure Nova Compute instances on the controllers and the compute-vmware node.
    8. Deploy cluster.
    9. Run OSTF.
    10. Add node with CinderVMware role.
        Redeploy cluster.
    11. Run OSTF.
    12. Remove node with CinderVMware role.
        Redeploy cluster.
    13. Run OSTF.
    14. Remove node with ComputeVMware role.
        Redeploy cluster.
    15. Run OSTF.

AssertionError: Failed 1 OSTF tests; should fail 0 tests. Names of failed tests:
  - vCenter: Check network connectivity from instance via floating IP (failure) Instance is not reachable by IP. Please refer to OpenStack logs for more details.
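
The failing check boils down to pinging the instance's floating IP. A minimal Python sketch of that probe, assuming the IP address and ping options seen in the failure logs later in this report (this is an illustration, not the actual OSTF test code):

    import subprocess

    FLOATING_IP = "172.16.211.104"  # floating IP reported as unreachable

    def is_reachable(ip, count=1, timeout=10):
        # Same probe the test uses: one ICMP echo with a 10-second deadline.
        cmd = ["ping", "-q", "-c", str(count), "-w", str(timeout), ip]
        return subprocess.call(cmd) == 0

    if __name__ == "__main__":
        print("reachable" if is_reachable(FLOATING_IP) else "not reachable")

Note that the real test runs the ping from a controller node over SSH; running it from a workstation only works if the external network is routable from there.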

Tags: nsxv
Revision history for this message
Andrey Setyaev (asetyaev-9) wrote :
Igor Zinovik (izinovik)
Changed in fuel-plugin-nsxv:
assignee: Partner Centric Engineering (fuel-partner-engineering) → Igor Zinovik (izinovik)
Revision history for this message
Igor Zinovik (izinovik) wrote :

Several attempts of the test were successful, but the last one failed:
ostf.log:
2016-06-17 13:54:44 SUCCESS vCenter: Check network connectivity from instance via floating IP
...
2016-06-17 14:18:25 SUCCESS vCenter: Check network connectivity from instance via floating IP
...
2016-06-17 14:40:11 SUCCESS vCenter: Check network connectivity from instance via floating IP
...
2016-06-17 15:03:11 FAILURE vCenter: Check network connectivity from instance via floating IP

For some reason the floating IP went down (no router or port is associated):
nova-api.log:RESP BODY: {"floatingip": {"router_id": null, "status": "DOWN", "description": "", "tenant_id": "d93e4f376642415fa645c3547e9aad42", "floating_network_id": "0f3d7c83-4e7d-44cf-ba0c-a1762b6c653d", "fixed_ip_address": null, "floating_ip_address": "172.16.211.104", "port_id": null, "id": "f6d09d89-b37e-41c4-8447-940e48eb0667"}}
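
One way to confirm that the floating IP is simply not associated with anything is to query it directly. A hedged sketch using python-neutronclient (credentials and endpoint are placeholders; the floating IP id comes from the response above):

    from neutronclient.v2_0 import client

    # Placeholder credentials/endpoint for the deployed environment.
    neutron = client.Client(
        username="admin", password="admin", tenant_name="admin",
        auth_url="http://<controller-vip>:5000/v2.0")

    fip = neutron.show_floatingip(
        "f6d09d89-b37e-41c4-8447-940e48eb0667")["floatingip"]

    # A working floating IP is ACTIVE with a non-null port_id; here the
    # status is DOWN and both port_id and router_id are null, i.e. the IP
    # is not attached to any instance port.
    print(fip["status"], fip["port_id"], fip["router_id"])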

Test tried to reach that IP address 172.16.211.104:
nailgun.test.domain.local/var/log/remote/127.0.0.1/ostf.log:2016-06-17T15:03:11.314082+00:00 info: fuel_health.tests.smoke.test_vcenter: INFO: is address is 172.16.211.104
nailgun.test.domain.local/var/log/remote/127.0.0.1/ostf.log:2016-06-17T15:03:11.314294+00:00 info: fuel_health.tests.smoke.test_vcenter: DEBUG: 172.16.211.104
nailgun.test.domain.local/var/log/remote/127.0.0.1/ostf.log:2016-06-17T15:03:11.327138+00:00 info: SSHExecCommandFailed: Command 'ping -q -c1 -w10 172.16.211.104', exit status: 1, Error:
nailgun.test.domain.local/var/log/remote/127.0.0.1/ostf.log:2016-06-17T15:03:11.327338+00:00 info: PING 172.16.211.104 (172.16.211.104) 56(84) bytes of data.
nailgun.test.domain.local/var/log/remote/127.0.0.1/ostf.log:2016-06-17T15:03:11.327533+00:00 info: --- 172.16.211.104 ping statistics ---
nailgun.test.domain.local/var/log/remote/127.0.0.1/ostf.log:2016-06-17T15:03:11.328149+00:00 info: fuel_health.nmanager: DEBUG: Command 'ping -q -c1 -w10 172.16.211.104', exit status: 1, Error:
nailgun.test.domain.local/var/log/remote/127.0.0.1/ostf.log:2016-06-17T15:03:11.328401+00:00 info: PING 172.16.211.104 (172.16.211.104) 56(84) bytes of data.
nailgun.test.domain.local/var/log/remote/127.0.0.1/ostf.log:2016-06-17T15:03:11.328617+00:00 info: --- 172.16.211.104 ping statistics ---

Changed in fuel-plugin-nsxv:
status: New → Triaged
Revision history for this message
Igor Zinovik (izinovik) wrote :

Could not reproduce the bug on Fuel 9.0 #495 with fuel-plugin-nsxv 3.0 #771.

It is worth mentioning that I used a reduced scenario:
1. Deploy 1 controller. Run OSTF.
2. Deploy 1 compute-vmware. Run OSTF.
3. Delete compute-vmware node. Run OSTF.

I will try to reproduce full scenario today.

Revision history for this message
Igor Zinovik (izinovik) wrote :

Could not reproduce the bug reporter's scenario on Fuel 9.0 #495 with
nsxv-3.0 #802. All OSTF tests succeeded.

For now marking as Invalid.

Changed in fuel-plugin-nsxv:
status: Triaged → Invalid
Revision history for this message
Artem Savinov (asavinov) wrote :

Tests continue to fail; additional investigation is needed.

Changed in fuel-plugin-nsxv:
assignee: Igor Zinovik (izinovik) → Artem Savinov (asavinov)
status: Invalid → Triaged
Artem Savinov (asavinov)
Changed in fuel-plugin-nsxv:
status: Triaged → In Progress
Revision history for this message
Artem Savinov (asavinov) wrote :

This bug is in the nsxv_add_delete_nodes test. In https://github.com/openstack/fuel-qa/blob/stable/mitaka/fuelweb_test/models/fuel_web_client.py#L704-L724 we set compute-vmware as target_node_1, and target_node_1 is used in https://github.com/openstack/fuel-qa/blob/stable/mitaka/fuelweb_test/models/fuel_web_client.py#L704-L724. vCenter "Cluster1" is assigned to target_node_1 (compute-vmware) and vCenter "Cluster2" is assigned to target_node_2 (controller). After the compute-vmware node is deleted, vCenter "Cluster1" is reassigned to the controller node, which is not supported and causes an error. As a result, the VM starts on Cluster2 (the controller node is not reconfigured), but NSXv is configured to pass traffic for Cluster1.
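
A purely illustrative sketch of that mismatch (the mapping and names below are hypothetical, not the fuel-qa API), showing the cluster-to-node assignment before and after the compute-vmware node is removed:

    # Hypothetical data; only illustrates the scenario described above.
    before = {
        "Cluster1": "compute-vmware",  # target_node_1 in the test
        "Cluster2": "controller",      # target_node_2 in the test
    }

    after_node_removal = {
        "Cluster1": "controller",  # reassigned automatically: not supported
        "Cluster2": "controller",  # controller keeps its original config
    }

    # The VM ends up on Cluster2, while NSXv is still wired to carry traffic
    # for Cluster1, so the floating IP connectivity check fails.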

Revision history for this message
Artem Savinov (asavinov) wrote :
Artem Savinov (asavinov)
Changed in fuel-plugin-nsxv:
status: In Progress → Fix Committed
Revision history for this message
Ilya Bumarskov (ibumarskov) wrote :

The fix was verified on build #837.

Changed in fuel-plugin-nsxv:
status: Fix Committed → Fix Released