Functional health subtest failure on fast networks when the full test list runs together

Bug #1483250 reported by Rawan Herzallah
This bug affects 1 person
Affects Status Importance Assigned to Milestone
Fuel for OpenStack
Invalid
High
Fuel QA Team

Bug Description

Description:
------------------
On fast networks, in some environments, a failure occurs when running the full functional test list together. The following subtest fails after less than a second during a full-list run:
Create volume and attach it to instance

It is possible to re-run the failing subtest alone from the UI, and it always passes then. Therefore, it seems the issue could be solved by adding a small sleep between the volume tests (boot from volume and create a volume).
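The suggested workaround could be sketched roughly like this (a hypothetical helper for illustration only; the actual OSTF test runner is structured differently, and the pause length is an assumption):

```python
import time


def run_with_pause(tests, pause_seconds=5):
    """Run test callables in order with a short pause between them.

    Hypothetical sketch of 'a small sleep between the volume tests';
    the delay gives the backend time to settle between subtests.
    """
    results = []
    for i, test in enumerate(tests):
        if i > 0:
            time.sleep(pause_seconds)
        results.append(test())
    return results
```

Note that, as the analysis further down shows, the failure ultimately turned out to be a disk-space problem rather than a timing one, so a sleep would not have fixed it.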

Steps to reproduce:
------------------------------
1. Create a CentOS environment with 3 physical slaves {controller, cinder, compute}, IBP, on a VM Fuel Master over a fast network
2. Deploy changes
3. Run Health tests {smoke, sanity, HA, cloud validation}

--> Error:
Functional tests
Create volume and attach it to instance
Failed to get to expected status. In error state. Please refer to OpenStack logs for more details.
Target component: Compute

Scenario:
1. Create a new small-size volume.
2. Wait for volume status to become "available".

--> it stops here and shows the above error.

VERSION:
---------------
VERSION:
  feature_groups:
    - mirantis
  production: "docker"
  release: "6.1"
  openstack_version: "2014.2.2-6.1"
  api: "1.0"
  build_number: "525"
  build_id: "2015-06-19_13-02-31"
  nailgun_sha: "dbd54158812033dd8cfd7e60c3f6650f18013a37"
  python-fuelclient_sha: "4fc55db0265bbf39c369df398b9dc7d6469ba13b"
  astute_sha: "1ea8017fe8889413706d543a5b9f557f5414beae"
  fuel-library_sha: "2e7a08ad9792c700ebf08ce87f4867df36aa9fab"
  fuel-ostf_sha: "8fefcf7c4649370f00847cc309c24f0b62de718d"
  fuelmain_sha: "a3998372183468f56019c8ce21aa8bb81fee0c2f"

Revision history for this message
Rawan Herzallah (rherzallah) wrote :

attached is fuel diagnostic snapshot

description: updated
Changed in fuel:
assignee: nobody → Fuel QA Team (fuel-qa)
importance: Undecided → High
status: New → Confirmed
milestone: none → 7.0
tags: added: customer-found
Revision history for this message
Nastya Urlapova (aurlapova) wrote :

@Rawan, could you give details about "In fast networks, in some environments" — what exactly does that mean?

Changed in fuel:
status: Confirmed → Incomplete
Revision history for this message
Rawan Herzallah (rherzallah) wrote :

@Nastya Urlapova: it happens in environments with fast Ethernet link speeds, and it doesn't happen in all setups. If we run the whole list, the subtest fails; however, if we re-run the failed subtest from the UI after a short while (no longer than 15 seconds), it passes.

tags: added: module-ostf
Revision history for this message
oleksii shyman (oshyman) wrote :

I've analyzed the logs. The issue is not related to networking; it is a problem with free space on the cinder node. I think this bug can be closed.

PROOF:

1) error itself

LOG LOCATION - 10.20.0.2/var/log/docker-logs/ostf.log

2015-08-09 17:19:25 DEBUG (test_mixins) Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/fuel_health/common/test_mixins.py", line 176, in verify
    result = func(*args, **kwargs)
  File "/usr/lib/python2.6/site-packages/fuel_health/tests/smoke/test_create_volume.py", line 49, in _wait_for_volume_status
    self.status_timeout(self.volume_client.volumes, volume.id, status)
  File "/usr/lib/python2.6/site-packages/fuel_health/test.py", line 129, in status_timeout
    conf.compute.build_interval):
  File "/usr/lib/python2.6/site-packages/fuel_health/test.py", line 63, in call_until_true
    elif func():
  File "/usr/lib/python2.6/site-packages/fuel_health/test.py", line 119, in check_status
    self.fail("Failed to get to expected status. "
  File "/usr/lib/python2.6/site-packages/unittest2/case.py", line 415, in fail
    raise self.failureException(msg)
AssertionError: Failed to get to expected status. In error state.

2015-08-09 17:19:25 ERROR (nose_storage_plugin) fuel_health.tests.smoke.test_create_volume.VolumesTest.test_volume_create
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/unittest2/case.py", line 340, in run
    testMethod()
  File "/usr/lib/python2.6/site-packages/fuel_health/tests/smoke/test_create_volume.py", line 86, in test_volume_create
    volume, 'available')
  File "/usr/lib/python2.6/site-packages/fuel_health/common/test_mixins.py", line 182, in verify
    " Please refer to OpenStack logs for more details.")
  File "/usr/lib/python2.6/site-packages/unittest2/case.py", line 415, in fail
    raise self.failureException(msg)
AssertionError: Step 2 failed: Failed to get to expected status. In error state. Please refer to OpenStack logs for more details.

2) 10 seconds earlier, I found the following message in the same file

LOG LOCATION - 10.20.0.2/var/log/docker-logs/ostf.log

2015-08-09 17:19:15 DEBUG (test) Waiting for <Volume: c623d24e-d900-40b0-9a07-c3e3eebbe995> to get to available status. Currently in creating status
2015-08-09 17:19:15 DEBUG (test) Sleeping for 10 seconds
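The tracebacks in item 1 correspond to a polling loop of roughly this shape (a simplified paraphrase of the `status_timeout`/`call_until_true` logic visible in the stack trace; function names, defaults, and the error handling here are assumptions, not the actual fuel_health code):

```python
import time


def call_until_true(func, timeout, sleep_for):
    """Poll func() until it returns True or timeout seconds elapse."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if func():
            return True
        time.sleep(sleep_for)  # e.g. the "Sleeping for 10 seconds" line above
    return False


def wait_for_volume_status(get_status, expected, timeout=300, sleep_for=10):
    """Wait for a volume to reach the expected status, failing fast on error."""
    def check_status():
        status = get_status()
        if status == 'error':
            # Mirrors "Failed to get to expected status. In error state."
            raise AssertionError('Failed to get to expected status. '
                                 'In error state.')
        return status == expected
    return call_until_true(check_status, timeout, sleep_for)
```

This explains the timeline above: the volume entered the `error` state within a single poll interval, so the test aborted immediately rather than waiting out the timeout.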

3) Analysis of the cinder log showed the following

LOG LOCATION - 10.20.0.2/var/log/docker-logs/remote/node-5.mirantis.com/cinder-volume.log

2015-08-09T17:13:06.221644+00:00 info: Volume ff2f4f61-e937-4280-8ec8-f15a72eb6490: being created as raw with specification:{'status': u'creating', 'volume_size': 1, 'volume_name': ...


Revision history for this message
oleksii shyman (oshyman) wrote :

I pasted the wrong piece of the logs in the previous comment for step #3; the correct one is:

2015-08-09T17:19:15.133839+00:00 info: Volume c623d24e-d900-40b0-9a07-c3e3eebbe995: being created as raw with specification: {'status': u'creating', 'volume_size': 1, 'volume_name': u'volume-c623d24e-d900-40b0-9a07-c3e3eebbe995'}

2015-08-09T17:19:15.489468+00:00 err: Error creating Volume
2015-08-09 17:19:15.491 27629 TRACE cinder.brick.local_dev.lvm Traceback (most recent call last):
2015-08-09 17:19:15.491 27629 TRACE cinder.brick.local_dev.lvm File "/usr/lib/python2.6/site-packages/cinder/brick/local_dev/lvm.py", line 474, in create_volume
2015-08-09 17:19:15.491 27629 TRACE cinder.brick.local_dev.lvm run_as_root=True)
2015-08-09 17:19:15.491 27629 TRACE cinder.brick.local_dev.lvm File "/usr/lib/python2.6/site-packages/cinder/utils.py", line 142, in execute
2015-08-09 17:19:15.491 27629 TRACE cinder.brick.local_dev.lvm return processutils.execute(*cmd, **kwargs)
2015-08-09 17:19:15.491 27629 TRACE cinder.brick.local_dev.lvm File "/usr/lib/python2.6/site-packages/cinder/openstack/common/processutils.py", line 200, in execute
2015-08-09 17:19:15.491 27629 TRACE cinder.brick.local_dev.lvm cmd=sanitized_cmd)
2015-08-09 17:19:15.491 27629 TRACE cinder.brick.local_dev.lvm ProcessExecutionError: Unexpected error while running command.
2015-08-09 17:19:15.491 27629 TRACE cinder.brick.local_dev.lvm Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf lvcreate -n volume-c623d24e-d900-40b0-9a07-c3e3eebbe995 cinder -L 1g
2015-08-09 17:19:15.491 27629 TRACE cinder.brick.local_dev.lvm Exit code: 5
2015-08-09 17:19:15.491 27629 TRACE cinder.brick.local_dev.lvm Stdout: u''
2015-08-09 17:19:15.491 27629 TRACE cinder.brick.local_dev.lvm Stderr: u' Volume group "cinder" has insufficient free space (129 extents): 256 required.\n'
2015-08-09 17:19:15.491 27629 TRACE cinder.brick.local_dev.lvm
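The numbers in the stderr line are consistent with LVM's default 4 MiB physical extent size: a 1 GiB volume needs 256 extents, while the cinder volume group had only 129 free (about 516 MiB). A quick sanity check of that arithmetic (assuming the default extent size; the actual value can be confirmed with `vgdisplay cinder`):

```python
# LVM default physical extent (PE) size, in MiB; an assumption here.
PE_SIZE_MIB = 4

requested_gib = 1          # lvcreate ... -L 1g
free_extents = 129         # from the lvcreate stderr above

required_extents = requested_gib * 1024 // PE_SIZE_MIB
free_mib = free_extents * PE_SIZE_MIB

print(required_extents)                   # 256, matching "256 required"
print(free_mib)                           # 516 MiB actually free
print(free_extents >= required_extents)   # False -> lvcreate exit code 5
```

So the 1 GiB test volume simply could not fit in the remaining space, regardless of network speed.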

Revision history for this message
Andrey Sledzinskiy (asledzinskiy) wrote :

Per Oleksii's explanation, the bug is closed as Invalid.

Changed in fuel:
status: Incomplete → Invalid
tags: added: area-ostf
removed: module-ostf