Tests fail with "Server [UUID] failed to reach ACTIVE status and task state "None" within the required time ([INTEGER] s). Current status: BUILD. Current task state: spawning."

Bug #1788006 reported by Nate Johnston
This bug affects 2 people
Affects                   Status        Importance  Assigned to       Milestone
OpenStack Compute (nova)  Invalid       Undecided   Unassigned
neutron                   Fix Released  Critical    Slawek Kaplonski

Bug Description

Testr report: http://logs.openstack.org/74/591074/4/check/neutron-tempest-plugin-designate-scenario/d02f171/testr_results.html.gz

Job log: http://logs.openstack.org/74/591074/4/check/neutron-tempest-plugin-designate-scenario/d02f171/job-output.txt.gz#_2018-08-20_13_11_46_407077

Traceback:

2018-08-20 13:11:46.406514 | controller | Captured traceback:
2018-08-20 13:11:46.406538 | controller | ~~~~~~~~~~~~~~~~~~~
2018-08-20 13:11:46.406567 | controller | Traceback (most recent call last):
2018-08-20 13:11:46.406659 | controller | File "/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/scenario/test_dns_integration.py", line 115, in test_server_with_fip
2018-08-20 13:11:46.406696 | controller | server = self._create_server(name=name)
2018-08-20 13:11:46.406771 | controller | File "/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/scenario/test_dns_integration.py", line 94, in _create_server
2018-08-20 13:11:46.406799 | controller | constants.SERVER_STATUS_ACTIVE)
2018-08-20 13:11:46.406844 | controller | File "tempest/common/waiters.py", line 96, in wait_for_server_status
2018-08-20 13:11:46.406893 | controller | raise lib_exc.TimeoutException(message)
2018-08-20 13:11:46.406938 | controller | tempest.lib.exceptions.TimeoutException: Request timed out
2018-08-20 13:11:46.407077 | controller | Details: (DNSIntegrationTests:test_server_with_fip) Server 9797a641-8925-4f74-abbf-c6111d47f91d failed to reach ACTIVE status and task state "None" within the required time (784 s). Current status: BUILD. Current task state: spawning.

Logstash link: http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22DNSIntegrationTests%3Atest_server_with_fip%5C%22%20AND%20%20%20message%3A%5C%22failed%20to%20reach%20ACTIVE%20status%20and%20task%20state%5C%22%20AND%20%20%20tags%3A%5C%22console%5C%22

40 hits in 7 days, all failures
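
For reference, the waiter in the traceback above is essentially a polling loop against the compute API. A minimal sketch of that logic, assuming a client object whose show_server() behaves like tempest's servers client (an illustration, not the actual tempest/common/waiters.py code):

import time

def wait_for_server_status(client, server_id, status='ACTIVE',
                           timeout=784, interval=2):
    # Poll the compute API until the server reaches the wanted status
    # with no pending task_state, or give up after `timeout` seconds.
    start = time.time()
    while time.time() - start < timeout:
        server = client.show_server(server_id)['server']
        if (server['status'] == status
                and server.get('OS-EXT-STS:task_state') is None):
            return server
        time.sleep(interval)
    raise TimeoutError(
        'Server %s failed to reach %s status and task state "None" within '
        'the required time (%s s). Current status: %s. Current task state: %s.'
        % (server_id, status, timeout, server['status'],
           server.get('OS-EXT-STS:task_state')))

In the failing runs the loop keeps seeing status BUILD / task state spawning until the timeout fires.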

description: updated
Miguel Lavalle (minsel)
Changed in neutron:
assignee: nobody → Miguel Lavalle (minsel)
Hongbin Lu (hongbin.lu)
Changed in neutron:
status: New → Confirmed
tags: added: gate-failure
Miguel Lavalle (minsel)
tags: added: l3-ipam-dhcp
Revision history for this message
Miguel Lavalle (minsel) wrote :

56 hits to be precise

Revision history for this message
Miguel Lavalle (minsel) wrote :

In the execution of DNSIntegrationTests:test_server_with_fip (https://github.com/openstack/neutron-tempest-plugin/blob/master/neutron_tempest_plugin/scenario/test_dns_integration.py#L113):

1) The instance ID 9797a641-8925-4f74-abbf-c6111d47f91d has a port with ID e14bdbb1-ca9a-44b8-9f3b-a296f01a71fd

2) Once the instance POST succeeds, the test waits for it to become active: https://github.com/openstack/neutron-tempest-plugin/blob/master/neutron_tempest_plugin/scenario/test_dns_integration.py#L92

3) In the Tempest log we can see that the wait for the instance to become active times out and a request to delete it is sent. Note that the instance never gets IP addresses:

2018-08-20 13:04:52,793 2367 INFO [tempest.lib.common.rest_client] Request (DNSIntegrationTests:test_server_with_fip): 200 GET http://213.32.74.39/compute/v2.1/servers/9797a641-8925-4f74-abbf-c6111d47f91d 0.227s
2018-08-20 13:04:52,794 2367 DEBUG [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': '<omitted>'}
        Body: None
    Response - Headers: {u'content-type': 'application/json', 'content-location': 'http://213.32.74.39/compute/v2.1/servers/9797a641-8925-4f74-abbf-c6111d47f91d', u'date': 'Mon, 20 Aug 2018 13:04:52 GMT', u'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version', u'x-openstack-request-id': 'req-df98af82-58fa-47a9-83b8-85f840184577', u'x-compute-request-id': 'req-df98af82-58fa-47a9-83b8-85f840184577', u'content-length': '1328', u'connection': 'close', 'status': '200', u'openstack-api-version': 'compute 2.1', u'x-openstack-nova-api-version': '2.1', u'server': 'Apache/2.4.18 (Ubuntu)'}
        Body: {"server": {"OS-EXT-STS:task_state": "spawning", "addresses": {}, "links": [{"href": "http://213.32.74.39/compute/v2.1/servers/9797a641-8925-4f74-abbf-c6111d47f91d", "rel": "self"}, {"href": "http://213.32.74.39/compute/servers/9797a641-8925-4f74-abbf-c6111d47f91d", "rel": "bookmark"}], "image": {"id": "4613d815-8eed-440c-a630-85a91292514c", "links": [{"href": "http://213.32.74.39/compute/images/4613d815-8eed-440c-a630-85a91292514c", "rel": "bookmark"}]}, "OS-EXT-STS:vm_state": "building", "OS-SRV-USG:launched_at": null, "flavor": {"id": "d1", "links": [{"href": "http://213.32.74.39/compute/flavors/d1", "rel": "bookmark"}]}, "id": "9797a641-8925-4f74-abbf-c6111d47f91d", "security_groups": [{"name": "default"}], "user_id": "4abf0780c8a04aefa22e94d055d6d082", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "progress": 0, "OS-EXT-STS:power_state": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "status": "BUILD", "updated": "2018-08-20T12:51:50Z", "hostId": "a771a3451772b8b848f9f12576e3243ea69e9ca77ccf82c66d34aa0e", "OS-SRV-USG:terminated_at": null, "key_name": "tempest-keypair-test-832974923", "name": "tempest-server-test-1466220015", "created": "2018-08-20T12:51:48Z", "tenant_id": "df130fc6f8a64ab1b877c39e32e8a1d6", "os-extended-volumes:volumes_attached": [], "metadata": {}}}

2018-08-20 13:04:53,178 2367 INFO [tempest.lib.common.rest_client] Request (DNSIntegrationTests:_run_cleanups): 204 DELETE http://213.32.74.39/compute/v2.1/se...


Changed in neutron:
importance: Undecided → High
Miguel Lavalle (minsel)
summary: - neutron_tempest_plugin DNS integration tests fail with "Server [UUID]
- failed to reach ACTIVE status and task state "None" within the required
- time ([INTEGER] s). Current status: BUILD. Current task state:
- spawning."
+ Tests fail with "Server [UUID] failed to reach ACTIVE status and task
+ state "None" within the required time ([INTEGER] s). Current status:
+ BUILD. Current task state: spawning."
Revision history for this message
Miguel Lavalle (minsel) wrote :

Tests are failing with "failed to reach ACTIVE status and task state "None" within the required time (xxx s). Current status: BUILD. Current task state: spawning" not only in the neutron-tempest-plugin-designate-scenario job. The same failure can be found in other jobs using the following Kibana query:

http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22failed%20to%20reach%20ACTIVE%20status%20and%20task%20state%5C%22%20AND%20message:%5C%22within%20the%20required%20time%5C%22%20AND%20project:%5C%22openstack%2Fneutron%5C%22%20AND%20tags:%5C%22console%5C%22&from=7d

I found this failure in neutron-tempest-plugin-dvr-multinode-scenario, neutron-tempest-plugin-scenario-linuxbridge, tempest-full, neutron-tempest-multinode-full, neutron-tempest-dvr.

Revision history for this message
Miguel Lavalle (minsel) wrote :

This is a successful case of DNSIntegrationTests:test_server_with_fip: http://paste.openstack.org/show/729357/ (http://logs.openstack.org/71/516371/8/check/neutron-tempest-plugin-designate-scenario/4a1503c/controller/logs/screen-n-cpu.txt.gz#_Sep_02_03_14_26_259660). Note that soon after the compute manager receives the 'network-vif-plugged' event, the instance becomes active:

Sep 02 03:14:28.204857 ubuntu-xenial-rax-iad-0001709993 nova-compute[25251]: INFO nova.compute.manager [None req-6f8f7d53-56c8-4dcd-92f4-1bc9186e9a34 tempest-DNSIntegrationTests-857454080 tempest-DNSIntegrationTests-857454080] [instance: 68bba825-9c49-45ab-b833-3bebf1da2876] Took 24.13 seconds to build instance.
Sep 02 03:14:28.229268 ubuntu-xenial-rax-iad-0001709993 nova-compute[25251]: DEBUG oslo_concurrency.lockutils [None req-6f8f7d53-56c8-4dcd-92f4-1bc9186e9a34 tempest-DNSIntegrationTests-857454080 tempest-DNSIntegrationTests-857454080] Lock "68bba825-9c49-45ab-b833-3bebf1da2876" released by "nova.compute.manager._locked_do_build_and_run_instance" :: held 24.280s {{(pid=25251) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:285}}
Sep 02 03:14:28.425080 ubuntu-xenial-rax-iad-0001709993 nova-compute[25251]: DEBUG nova.compute.manager [None req-c8343f9e-cc06-4e5a-8325-c88f4fc5240a None None] [instance: 68bba825-9c49-45ab-b833-3bebf1da2876] Checking state {{(pid=25251) _get_power_state /opt/stack/nova/nova/compute/manager.py:1260}}
Sep 02 03:14:28.428718 ubuntu-xenial-rax-iad-0001709993 nova-compute[25251]: DEBUG nova.compute.manager [None req-c8343f9e-cc06-4e5a-8325-c88f4fc5240a None None] [instance: 68bba825-9c49-45ab-b833-3bebf1da2876] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 {{(pid=25251) handle_lifecycle_event /opt/stack/nova/nova/compute/manager.py:1117}}

Revision history for this message
Miguel Lavalle (minsel) wrote :

In contrast, this is a failed case of DNSIntegrationTests:test_server_with_fip: http://paste.openstack.org/show/729358/ (http://logs.openstack.org/94/592194/5/check/neutron-tempest-plugin-designate-scenario/99c0849/controller/logs/screen-n-cpu.txt.gz#_Aug_31_13_11_28_206569). Note that soon after the compute manager receives the 'network-vif-plugged' event, the log shows "the instance has a pending task (spawning). Skip". The instance never becomes active.

Revision history for this message
Slawek Kaplonski (slaweq) wrote :

I also checked, and it looks like similar issues happen not only in Neutron jobs.

Miguel Lavalle (minsel)
Changed in neutron:
importance: High → Critical
Revision history for this message
Miguel Lavalle (minsel) wrote :

For the query in #9 to be meaningful, after Kibana returns the results, ask it to show the project field.

Revision history for this message
Matt Riedemann (mriedem) wrote :

> "the instance has a pending task (spawning). Skip".

That's unrelated; it comes from a periodic task that runs in nova-compute to look for discrepancies between the guests the hypervisor reports as running and what's in the nova database. As noted in the log message, it skips instances that have a task_state set (spawning in this case).
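
For context, a rough sketch of what that periodic task does; the names and structure here are illustrative, not nova's actual power-state sync code:

def sync_power_states(db_instances, hypervisor_power_states):
    # Compare the power_state recorded in the DB with what the hypervisor
    # reports, skipping any instance that has a pending task_state
    # (e.g. 'spawning'), which is exactly the "Skip" message quoted above.
    mismatches = []
    for instance in db_instances:
        if instance['task_state'] is not None:
            continue
        reported = hypervisor_power_states.get(instance['uuid'])
        if reported != instance['power_state']:
            mismatches.append(
                (instance['uuid'], instance['power_state'], reported))
    return mismatches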

--

Looking at the original failed job link, nova-compute starts waiting for the network-vif-plugged event here:

http://logs.openstack.org/74/591074/4/check/neutron-tempest-plugin-designate-scenario/d02f171/controller/logs/screen-n-cpu.txt.gz#_Aug_20_12_52_04_659588

Aug 20 12:52:04.659588 ubuntu-xenial-ovh-gra1-0001411231 nova-compute[27062]: DEBUG nova.compute.manager [None req-e3fd2700-621a-4a80-b16b-03d6018efbf2 tempest-DNSIntegrationTests-199337017 tempest-DNSIntegrationTests-199337017] [instance: 9797a641-8925-4f74-abbf-c6111d47f91d] Preparing to wait for external event network-vif-plugged-e14bdbb1-ca9a-44b8-9f3b-a296f01a71fd {{(pid=27062) prepare_for_instance_event /opt/stack/nova/nova/compute/manager.py:328}}
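
For context, the prepare-and-wait pattern referenced in that log line works roughly like this: the compute manager registers the expected event before plugging the VIF (so a fast callback from neutron cannot be missed), then blocks with a timeout until the event arrives. A minimal sketch using a plain threading-based waiter, not nova's actual implementation:

import threading

class ExternalEventWaiter(object):
    def __init__(self):
        self._events = {}

    def prepare_for_event(self, name):
        # Register interest *before* triggering the action that causes
        # the event (here: plugging the VIF).
        self._events[name] = threading.Event()

    def event_received(self, name):
        # Called when the external service (neutron) reports the event.
        if name in self._events:
            self._events[name].set()

    def wait_for_event(self, name, timeout=300):
        # Block the spawn until the event arrives or the timeout expires
        # (nova exposes this timeout as the vif_plugging_timeout option).
        if not self._events[name].wait(timeout):
            raise RuntimeError('Timed out waiting for %s' % name)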

The libvirt driver then plugs the vif on the host:

vif={"profile": {}, "ovs_interfaceid": "e14bdbb1-ca9a-44b8-9f3b-a296f01a71fd", "preserve_on_delete": true, "network": {"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 4, "type": "fixed", "floating_ips": [], "address": "10.1.0.6"}], "version": 4, "meta": {"dhcp_server": "10.1.0.2"}, "dns": [], "routes": [], "cidr": "10.1.0.0/28", "gateway": {"meta": {}, "version": 4, "type": "gateway", "address": "10.1.0.1"}}], "meta": {"injected": false, "tunneled": true, "tenant_id": "df130fc6f8a64ab1b877c39e32e8a1d6", "physical_network": null, "mtu": 1400}, "id": "f7495a16-b458-49e3-b219-b5baa539df75", "label": "tempest-test-network--304750879"}, "devname": "tape14bdbb1-ca", "vnic_type": "normal", "qbh_params": null, "meta": {}, "details": {"port_filter": true, "datapath_type": "system", "ovs_hybrid_plug": false}, "address": "fa:16:3e:7a:37:d1", "active": false, "type": "ovs", "id": "e14bdbb1-ca9a-44b8-9f3b-a296f01a71fd", "qbg_params": null}

n-cpu receives the network-vif-plugged event here:

http://logs.openstack.org/74/591074/4/check/neutron-tempest-plugin-designate-scenario/d02f171/controller/logs/screen-n-cpu.txt.gz#_Aug_20_12_52_07_626394

Aug 20 12:52:07.626394 ubuntu-xenial-ovh-gra1-0001411231 nova-compute[27062]: DEBUG nova.compute.manager [None req-633098b6-6b29-473b-a7d1-76fbf8648e7b service nova] [instance: 9797a641-8925-4f74-abbf-c6111d47f91d] Received event network-vif-plugged-e14bdbb1-ca9a-44b8-9f3b-a296f01a71fd {{(pid=27062) external_instance_event /opt/stack/nova/nova/compute/manager.py:8077}}

And then after that, yeah, I don't see nova saying the VM is active... I wonder if we're dropping the event somehow with an eventlet greenthread switch or something - there were some recent eventlet updates in upper-constraints, but I wouldn't think those are also on stable branches.

Do we know when this started occurring in the DNS job?

Revision history for this message
Matt Riedemann (mriedem) wrote :

I do see this in the n-cpu logs after nova-compute receives the network-vif-plugged event:

Aug 20 12:52:07.637139 ubuntu-xenial-ovh-gra1-0001411231 nova-compute[27062]: DEBUG nova.virt.libvirt.driver [None req-e3fd2700-621a-4a80-b16b-03d6018efbf2 tempest-DNSIntegrationTests-199337017 tempest-DNSIntegrationTests-199337017] [instance: 9797a641-8925-4f74-abbf-c6111d47f91d] Guest created on hypervisor {{(pid=27062) spawn /opt/stack/nova/nova/virt/libvirt/driver.py:3076}}

Which means we actually do process the event because we get out of the libvirt driver's _create_domain_and_network method.

What I'm not seeing is this log message: "Instance spawned successfully", which means the guest power state doesn't go to 'running' and that's where we're timing out.

I wonder if something has changed in libvirt/qemu, like distro patches, that would impact how the notifications (lifecycle events) are processed and causing nova to think the guest is not running.

Revision history for this message
Matt Riedemann (mriedem) wrote :

This is the package version being used on master and rocky:

ii libvirt-bin 4.0.0-1ubuntu8.3~cloud0
ii qemu-system 1:2.11+dfsg-1ubuntu7.4~cloud0

And queens:

ii libvirt-bin 3.6.0-1ubuntu6.8~cloud0
ii qemu-system 1:2.10+dfsg-0ubuntu3.8~cloud1

But if a patch to one of those was made in Ubuntu we'd be pulling it in, which might explain why we're now seeing this on multiple branches.

Revision history for this message
Matt Riedemann (mriedem) wrote :

When we timeout, the instance power_state is 0:

"OS-EXT-STS:power_state": 0

Which maps to NOSTATE, so yeah something is messed up in how the hypervisor is reporting the guest power state.
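
For reference, the numeric values in OS-EXT-STS:power_state map roughly as follows; treat this as a paraphrase of nova's power state constants, not the source itself:

# Approximate meaning of the OS-EXT-STS:power_state values.
POWER_STATES = {
    0: 'NOSTATE',    # hypervisor reports nothing useful about the guest
    1: 'RUNNING',
    3: 'PAUSED',
    4: 'SHUTDOWN',
    6: 'CRASHED',
    7: 'SUSPENDED',
}

print(POWER_STATES[0])  # the value seen when the wait times out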

Revision history for this message
Matt Riedemann (mriedem) wrote :

Here are the state changes I'm seeing in that failure log while n-cpu is waiting for the network-vif-plugged event and after it's plugged the vif on the host:

Aug 20 12:52:06.006477 ubuntu-xenial-ovh-gra1-0001411231 nova-compute[27062]: DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1534769526.01, 9797a641-8925-4f74-abbf-c6111d47f91d => Started> {{(pid=27062) emit_event /opt/stack/nova/nova/virt/driver.py:1545}}
Aug 20 12:52:06.006973 ubuntu-xenial-ovh-gra1-0001411231 nova-compute[27062]: INFO nova.compute.manager [-] [instance: 9797a641-8925-4f74-abbf-c6111d47f91d] VM Started (Lifecycle Event)

--

Aug 20 12:52:06.065415 ubuntu-xenial-ovh-gra1-0001411231 nova-compute[27062]: DEBUG nova.virt.driver [None req-27237dca-57e7-488d-aa47-f8f53ad60fc9 None None] Emitting event <LifecycleEvent: 1534769526.01, 9797a641-8925-4f74-abbf-c6111d47f91d => Paused> {{(pid=27062) emit_event /opt/stack/nova/nova/virt/driver.py:1545}}
Aug 20 12:52:06.065642 ubuntu-xenial-ovh-gra1-0001411231 nova-compute[27062]: INFO nova.compute.manager [None req-27237dca-57e7-488d-aa47-f8f53ad60fc9 None None] [instance: 9797a641-8925-4f74-abbf-c6111d47f91d] VM Paused (Lifecycle Event)

--

Aug 20 12:52:07.636565 ubuntu-xenial-ovh-gra1-0001411231 nova-compute[27062]: DEBUG nova.virt.driver [None req-27237dca-57e7-488d-aa47-f8f53ad60fc9 None None] Emitting event <LifecycleEvent: 1534769527.63, 9797a641-8925-4f74-abbf-c6111d47f91d => Resumed> {{(pid=27062) emit_event /opt/stack/nova/nova/virt/driver.py:1545}}
Aug 20 12:52:07.636911 ubuntu-xenial-ovh-gra1-0001411231 nova-compute[27062]: INFO nova.compute.manager [None req-27237dca-57e7-488d-aa47-f8f53ad60fc9 None None] [instance: 9797a641-8925-4f74-abbf-c6111d47f91d] VM Resumed (Lifecycle Event)

--

Aug 20 12:52:07.680184 ubuntu-xenial-ovh-gra1-0001411231 nova-compute[27062]: DEBUG nova.virt.driver [None req-27237dca-57e7-488d-aa47-f8f53ad60fc9 None None] Emitting event <LifecycleEvent: 1534769527.64, 9797a641-8925-4f74-abbf-c6111d47f91d => Resumed> {{(pid=27062) emit_event /opt/stack/nova/nova/virt/driver.py:1545}}
Aug 20 12:52:07.680790 ubuntu-xenial-ovh-gra1-0001411231 nova-compute[27062]: INFO nova.compute.manager [None req-27237dca-57e7-488d-aa47-f8f53ad60fc9 None None] [instance: 9797a641-8925-4f74-abbf-c6111d47f91d] VM Resumed (Lifecycle Event)

--

Aug 20 12:52:07.721266 ubuntu-xenial-ovh-gra1-0001411231 nova-compute[27062]: DEBUG nova.virt.driver [None req-27237dca-57e7-488d-aa47-f8f53ad60fc9 None None] Emitting event <LifecycleEvent: 1534769527.64, 9797a641-8925-4f74-abbf-c6111d47f91d => Paused> {{(pid=27062) emit_event /opt/stack/nova/nova/virt/driver.py:1545}}
Aug 20 12:52:07.721592 ubuntu-xenial-ovh-gra1-0001411231 nova-compute[27062]: INFO nova.compute.manager [None req-27237dca-57e7-488d-aa47-f8f53ad60fc9 None None] [instance: 9797a641-8925-4f74-abbf-c6111d47f91d] VM Paused (Lifecycle Event)

And then that's it. So it looks like we go from resumed (running) back to paused in the guest and then never get back to running.
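
To make that sequence concrete, here is how those lifecycle events translate into an inferred power state; the mapping is illustrative (names taken from the log messages above, not necessarily nova's internal constants):

# Last-event-wins view of the lifecycle events seen in the failing run.
LIFECYCLE_TO_POWER_STATE = {
    'Started': 'running',
    'Resumed': 'running',
    'Paused': 'paused',
    'Stopped': 'shutdown',
}

events = ['Started', 'Paused', 'Resumed', 'Resumed', 'Paused']
print([LIFECYCLE_TO_POWER_STATE[e] for e in events])
# -> ['running', 'paused', 'running', 'running', 'paused']
# The last observed state is 'paused', so the guest is never seen as
# running again and the build eventually times out.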

Revision history for this message
Matt Riedemann (mriedem) wrote :

Looking at the libvirtd logs, I'm seeing:

2018-08-20 12:52:07.630+0000: 27153: debug : qemuProcessHandleResume:807 : Transitioned guest instance-00000001 out of paused into resumed state

2018-08-20 12:52:07.631+0000: 27153: debug : qemuProcessHandleStop:755 : Transitioned guest instance-00000001 to paused state, reason unknown

I think ^ is where things go wrong and we don't recover.

There are errors in the qemu domain log:

http://logs.openstack.org/74/591074/4/check/neutron-tempest-plugin-designate-scenario/d02f171/controller/logs/libvirt/qemu/instance-00000001_log.txt.gz

KVM: entry failed, hardware error 0x0

If you look at the node providers that these failures show up on, they are primarily OVH:

  Value                   Count (of 82 events)
  1. ovh-bhs1             48
  2. ovh-gra1             32
  3. limestone-regionone  2

And the bug is because the job is using virt_type=kvm rather than qemu:

http://logs.openstack.org/74/591074/4/check/neutron-tempest-plugin-designate-scenario/d02f171/controller/logs/etc/nova/nova-cpu_conf.txt.gz

[libvirt]
live_migration_uri = qemu+ssh://stack@%s/system
cpu_mode = none
virt_type = kvm

Running nested virt with kvm is a known issue, so that's definitely the bug.
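
For comparison, a non-nested configuration for these jobs would look like the snippet below in nova-cpu.conf; this is illustrative of the direction of the fix (the actual fix, proposed further down, simply reverts the job change that enabled nested virt):

[libvirt]
live_migration_uri = qemu+ssh://stack@%s/system
cpu_mode = none
virt_type = qemu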

Matt Riedemann (mriedem)
Changed in nova:
status: New → Invalid
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron (master)

Fix proposed to branch: master
Review: https://review.openstack.org/601438

Changed in neutron:
assignee: Miguel Lavalle (minsel) → Slawek Kaplonski (slaweq)
status: Confirmed → In Progress
Revision history for this message
Akihiro Motoki (amotoki) wrote :

According to the logstash query described in the bug description, it fails with the following node_provider values: ovh-bhs1, ovh-gra1, limestone-regionone.
If the situation changes on these nodes, we can say the fix works correctly.

Revision history for this message
Slawek Kaplonski (slaweq) wrote :

The most important patch to fix this is now merged: https://review.openstack.org/#/c/601439/
We will still need https://review.openstack.org/#/c/601438/, but it's less important since it only affects non-voting jobs.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to neutron (master)

Reviewed: https://review.openstack.org/601438
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=91d6151a4d732bca802e0c0613cefa35d6d36698
Submitter: Zuul
Branch: master

commit 91d6151a4d732bca802e0c0613cefa35d6d36698
Author: Slawek Kaplonski <email address hidden>
Date: Mon Sep 10 22:12:01 2018 +0000

    Revert "Use nested virt in scenario jobs"

    Nested virt doesn't work well on infra nodes.
    This reverts commit 73f111e7e860218857351309d59a3a329e18850b.

    Change-Id: Ie76684e656066834fc0600d618004ca21ccb1223
    Closes-Bug: #1788006

Changed in neutron:
status: In Progress → Fix Released
tags: added: neutron-proactive-backport-potential
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/neutron 14.0.0.0b1

This issue was fixed in the openstack/neutron 14.0.0.0b1 development milestone.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/neutron-tempest-plugin 0.3.0

This issue was fixed in the openstack/neutron-tempest-plugin 0.3.0 release.
