Multiple volume-related tempest tests failed on fs020; they were skipped in review https://review.opendev.org/#/c/701403/, and the same tests are now triggered in fs021 to find and debug the issue.
For example:
{0} tempest.api.compute.servers.test_device_tagging.TaggedAttachmentsTest.test_tagged_attachment [440.572185s] ... FAILED
{1} tempest.api.compute.volumes.test_attach_volume.AttachVolumeShelveTestJSON.test_attach_volume_shelved_or_offload_server [673.744471s] ... FAILED
Captured traceback-2:
~~~~~~~~~~~~~~~~~~~~~
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/tempest/common/waiters.py", line 215, in wait_for_volume_resource_status
raise lib_exc.TimeoutException(message)
tempest.lib.exceptions.TimeoutException: Request timed out
Details: volume 574c8e12-e5b6-4221-9a31-db6097f7a476 failed to reach available status (current reserved) within the required time (300 s).
Captured traceback:
~~~~~~~~~~~~~~~~~~~
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/tempest/api/compute/volumes/test_attach_volume.py", line 255, in test_attach_volume_shelved_or_offload_server
server, validation_resources, num_vol + 1)
File "/usr/lib/python2.7/site-packages/tempest/api/compute/volumes/test_attach_volume.py", line 234, in _unshelve_server_and_check_volumes
'ACTIVE')
File "/usr/lib/python2.7/site-packages/tempest/common/waiters.py", line 96, in wait_for_server_status
raise lib_exc.TimeoutException(message)
tempest.lib.exceptions.TimeoutException: Request timed out
Details: (AttachVolumeShelveTestJSON:test_attach_volume_shelved_or_offload_server) Server ccf85cec-e8c3-4b1a-8e15-6270e2e64385 failed to reach ACTIVE status and task state "None" within the required time (300 s). Current status: SHELVED_OFFLOADED. Current task state: spawning.
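Both failures come from tempest waiters timing out while polling a resource status. A minimal sketch of that polling pattern (simplified, not the actual tempest implementation; `get_status` stands in for the API call that returns the current status):

```python
import time


class TimeoutException(Exception):
    """Raised when a resource does not reach the expected status in time."""


def wait_for_status(get_status, expected, timeout=300, interval=1):
    """Poll get_status() until it returns `expected` or `timeout` elapses.

    Simplified sketch of the pattern used by tempest's waiters: the volume
    above stayed in 'reserved' and the server in 'SHELVED_OFFLOADED', so the
    loop never saw the expected status and raised after 300 s.
    """
    start = time.time()
    current = None
    while time.time() - start < timeout:
        current = get_status()
        if current == expected:
            return current
        time.sleep(interval)
    raise TimeoutException(
        "resource failed to reach %s status (current %s) within "
        "the required time (%s s)." % (expected, current, timeout))
```

With a healthy backend the loop returns quickly; when the attachment is stuck, the timeout fires exactly as in the tracebacks above.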
Below is the list of the other tests that failed:
* tempest.api.compute.servers.test_device_tagging.TaggedAttachmentsTest.test_tagged_attachment
* tempest.api.compute.volumes.test_attach_volume.AttachVolumeShelveTestJSON.test_attach_volume_shelved_or_offload_server
* tempest.api.compute.volumes.test_attach_volume_negative.AttachVolumeNegativeTest.test_attach_attached_volume_to_different_server
* tempest.api.compute.admin.test_volumes_negative.VolumesAdminNegativeTest.test_update_attached_volume_with_nonexistent_volume_in_body
* tempest.api.compute.volumes.test_attach_volume_negative.AttachVolumeNegativeTest.test_attach_attached_volume_to_same_server
* tempest.api.compute.volumes.test_attach_volume_negative.AttachVolumeNegativeTest.test_delete_attached_volume
* tempest.api.compute.servers.test_delete_server.DeleteServersTestJSON.test_delete_server_while_in_attached_volume
* tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario
* tempest.scenario.test_stamp_pattern.TestStampPattern.test_stamp_pattern
* tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_rebuild_server_with_volume_attached
...and many others.
While looking at the nova-compute logs (http://logs.rdoproject.org/openstack-periodic-master/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-7-ovb-1ctlr_2comp-featureset021-master/9cad320/logs/overcloud-novacompute-0/var/log/containers/nova/nova-compute.log), the following entries stand out:
"2020-01-27 11:47:07.505 8 DEBUG os_brick.initiator.connectors.iscsi [req-576d1a32-ba56-4e5b-ade7-b7d8b733c1d2 89cf2d21771a4690ae41516837ff07eb 9943b7e169624779a7077676dc58e1ad - default default] iscsi session list stdout= stderr=iscsiadm: No active sessions.
_run_iscsi_session /usr/lib/python2.7/site-packages/os_brick/initiator/connectors/iscsi.py:1113
2020-01-27 11:47:07.505 8 WARNING os_brick.initiator.connectors.iscsi [req-576d1a32-ba56-4e5b-ade7-b7d8b733c1d2 89cf2d21771a4690ae41516837ff07eb 9943b7e169624779a7077676dc58e1ad - default default] iscsiadm stderr output when getting sessions: iscsiadm: No active sessions.
2020-01-27 11:47:07.889 8 DEBUG nova.compute.provider_tree [req-00a1af11-5fbc-4deb-99c8-27b44afab732 785ce0b818f24cf8b2d43f3b40abb9d3 1ceecda10db945e48e69aedb2f0417c2 - default default] Inventory has not changed in ProviderTree for provider: 47fe23d0-af6a-4528-a394-4ef122e47656 update_inventory /usr/lib/python2.7/site-packages/nova/compute/provider_tree.py:181
2020-01-27 11:47:07.893 8 DEBUG nova.virt.libvirt.driver [req-00a1af11-5fbc-4deb-99c8-27b44afab732 785ce0b818f24cf8b2d43f3b40abb9d3 1ceecda10db945e48e69aedb2f0417c2 - default default] Libvirt baseline CPU <cpu>
<arch>x86_64</arch>
<model>qemu64</model>
<vendor>Intel</vendor>
<topology sockets="4" cores="1" threads="1"/>
</cpu>
_get_guest_baseline_cpu_features /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:10532
2020-01-27 11:47:07.919 8 DEBUG nova.scheduler.client.report [req-00a1af11-5fbc-4deb-99c8-27b44afab732 785ce0b818f24cf8b2d43f3b40abb9d3 1ceecda10db945e48e69aedb2f0417c2 - default default] Inventory has not changed for provider 47fe23d0-af6a-4528-a394-4ef122e47656 based on inventory data: {u'VCPU': {u'allocation_ratio': 16.0, u'total': 4, u'reserved': 0, u'step_size': 1, u'min_unit': 1, u'max_unit': 4}, u'MEMORY_MB': {u'allocation_ratio': 1.0, u'total': 8191, u'reserved': 512, u'step_size': 1, u'min_unit': 1, u'max_unit': 8191}, u'DISK_GB': {u'allocation_ratio': 1.0, u'total': 79, u'reserved': 0, u'step_size': 1, u'min_unit': 1, u'max_unit': 79}} set_inventory_for_provider /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:897
2020-01-27 11:47:07.920 8 DEBUG oslo_concurrency.lockutils [req-00a1af11-5fbc-4deb-99c8-27b44afab732 785ce0b818f24cf8b2d43f3b40abb9d3 1ceecda10db945e48e69aedb2f0417c2 - default default] Lock "compute_resources" released by "nova.compute.resource_tracker.abort_instance_claim" :: held 0.471s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:370
2020-01-27 11:47:07.921 8 ERROR nova.compute.manager [req-00a1af11-5fbc-4deb-99c8-27b44afab732 785ce0b818f24cf8b2d43f3b40abb9d3 1ceecda10db945e48e69aedb2f0417c2 - default default] [instance: ccf85cec-e8c3-4b1a-8e15-6270e2e64385] Instance failed to spawn: VolumeDeviceNotFound: Volume device not found at .
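The key signal in the log is os-brick reporting "iscsiadm: No active sessions." on stderr just before the VolumeDeviceNotFound error. A hypothetical helper illustrating that check (the function name and logic are illustrative, not os-brick's actual code):

```python
def has_active_iscsi_sessions(stdout, stderr):
    """Interpret the output of `iscsiadm -m session`.

    Hypothetical helper mirroring the condition visible in the log above:
    iscsiadm prints "No active sessions." on stderr when the initiator has
    no session logged in to any target, which here means the compute node
    never reached the iSCSI target, so the volume device cannot appear.
    """
    if 'No active sessions' in stderr:
        return False
    # Any non-empty session listing on stdout indicates at least one session.
    return bool(stdout.strip())
```

In this run, stdout was empty and stderr carried the "No active sessions" message, so the attachment had no chance of completing.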
It might be something related to tripleo_iscsi, which listens on port 3260.
On checking the firewall rules on the controller (http://logs.rdoproject.org/openstack-periodic-master/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-7-ovb-1ctlr_2comp-featureset021-master/9cad320/logs/overcloud-controller-0/etc/sysconfig/iptables), there is no rule associated with that port.
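For comparison, a rule along the following lines would be expected on the controller to accept iSCSI target traffic (this is an assumed example rule, not the exact one TripleO generates; the absence of any 3260 rule in the dumped iptables file is what matters):

```shell
# Hypothetical iptables rule accepting new TCP connections to the
# iSCSI target port 3260 on the controller.
iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 3260 -j ACCEPT
```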
Fix proposed to branch: master
Review: https://review.opendev.org/704962