neutron-l3-agent and neutron-dhcp-agent never recovered after force reboot

Bug #1835807 reported by Anujeyan Manokeran
Affects: StarlingX
Status: Won't Fix
Importance: High
Assigned to: Joseph Richard

Bug Description

Brief Description
-----------------

  The neutron-l3-agent and neutron-dhcp-agent on compute-3 went down and never recovered after compute-3 was force rebooted. The output below shows the agents up before the reboot and down after the host came back up.

Send 'openstack --os-username 'admin' --os-password 'Li69nux*' --os-project-name admin --os-auth-url http://keystone.openstack.svc.cluster.local/v3 --os-user-domain-name Default --os-project-domain-name Default --os-identity-api-version 3 --os-interface internal --os-region-name RegionOne network agent list'
[2019-07-07 04:53:48,062] 423 DEBUG MainThread ssh.expect :: Output:
+--------------------------------------+--------------------+-----------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+-----------+-------------------+-------+-------+---------------------------+
| 02266b80-8c4e-40b7-a9fa-72d6695a1d35 | Metadata agent | compute-4 | None | :-) | UP | neutron-metadata-agent |
| 0398ee2d-75b8-4493-8f2a-76e9efe13946 | DHCP agent | compute-2 | nova | :-) | UP | neutron-dhcp-agent |
| 0488ea35-3833-4e95-8c89-57cb3badefbe | L3 agent | compute-3 | nova | :-) | UP | neutron-l3-agent |
| 084b585c-3b5a-45df-90be-51f66a1745d1 | NIC Switch agent | compute-2 | None | :-) | UP | neutron-sriov-nic-agent |
| 0dc1dd0e-928d-4827-926f-923c3b645b33 | L3 agent | compute-0 | nova | :-) | UP | neutron-l3-agent |
| 167b07dc-76ad-44f2-acd1-ae73f977af8d | NIC Switch agent | compute-4 | None | :-) | UP | neutron-sriov-nic-agent |
| 1b8f8bb7-a9cd-4bd7-b429-82eb6a57e5ed | Open vSwitch agent | compute-0 | None | :-) | UP | neutron-openvswitch-agent |
| 1d09fdbd-b6bf-468f-9162-21e1984cca40 | L3 agent | compute-1 | nova | :-) | UP | neutron-l3-agent |
| 2ac2053c-073c-4d75-bbac-6367937b9b7b | Open vSwitch agent | compute-3 | None | :-) | UP | neutron-openvswitch-agent |
| 2b3e4e26-7364-4d55-aac8-9e75d5bd517c | NIC Switch agent | compute-3 | None | :-) | UP | neutron-sriov-nic-agent |
| 45a5a758-df01-4d74-b197-f60c023c8342 | Open vSwitch agent | compute-2 | None | :-) | UP | neutron-openvswitch-agent |
| 4684d7f7-6606-4888-ab83-c97867f81cc1 | DHCP agent | compute-3 | nova | :-) | UP | neutron-dhcp-agent |
| 46ad03ef-5656-4911-a1dc-20dd48f0c0ae | NIC Switch agent | compute-0 | None | :-) | UP | neutron-sriov-nic-agent |
| 4c4413b2-72ce-4dd0-a609-d35dca96a33c | Metadata agent | compute-0 | None | :-) | UP | neutron-metadata-agent |
| 5593bdb5-8fb0-4589-9417-4eea8200af18 | DHCP agent | compute-4 | nova | :-) | UP | neutron-dhcp-agent |
| 58290c99-d3b3-43ac-9d32-84e4197d52f4 | L3 agent | compute-2 | nova | :-) | UP | neutron-l3-agent |
| 603e727d-0f28-4010-84ab-31fc799f67c1 | Metadata agent | compute-3 | None | :-) | UP | neutron-metadata-agent |
| 639282a0-0f1b-4ef5-8352-a3ea90a0b797 | Open vSwitch agent | compute-4 | None | :-) | UP | neutron-openvswitch-agent |
| 6e13479a-565d-44a1-8b54-30280342362d | Metadata agent | compute-2 | None | :-) | UP | neutron-metadata-agent |
| 79ac04e7-4f7b-4015-be3c-9ff0a411914b | Open vSwitch agent | compute-1 | None | :-) | UP | neutron-openvswitch-agent |
| 844028c3-b522-4e12-83dd-093cb35e7b04 | L3 agent | compute-4 | nova | :-) | UP | neutron-l3-agent |
| 864fbc99-cf8b-422f-bb7b-2ac942340629 | NIC Switch agent | compute-1 | None | :-) | UP | neutron-sriov-nic-agent |
| a647d5ea-012a-4940-8dc4-4db4cd17b619 | DHCP agent | compute-0 | nova | :-) | UP | neutron-dhcp-agent |
| ad20d257-e40c-48f8-ae17-088c6cd64b18 | Metadata agent | compute-1 | None | :-) | UP | neutron-metadata-agent |
| caad4860-bf10-48b6-bd2a-ec6fb1b10c2a | DHCP agent | compute-1 | nova | :-) | UP | neutron-dhcp-agent |
+--------------------------------------+--------------------+-----------+-------------------+-------+-------+---------------------------+

 Send 'system --os-username 'admin' --os-password 'Li69nux*' --os-project-name admin --os-auth-url http://192.168.222.2:5000/v3 --os-user-domain-name Default --os-project-domain-name Default --os-endpoint-type internalURL --os-region-name RegionOne host-list'
[2019-07-07 04:54:57,648] 423 DEBUG MainThread ssh.expect :: Output:
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | compute-0 | worker | unlocked | enabled | available |
| 3 | compute-1 | worker | unlocked | enabled | available |
| 4 | compute-2 | worker | unlocked | enabled | available |
| 5 | compute-3 | worker | unlocked | enabled | available |
| 6 | compute-4 | worker | unlocked | enabled | available |
| 7 | controller-1 | controller | unlocked | enabled | available |
| 8 | storage-0 | storage | unlocked | enabled | available |
| 9 | storage-1 | storage | unlocked | enabled | available |
+----+--------------+-------------+----------------+-------------+--------------+
controller-1:~$
[2019-07-07 04:54:57,649] 301 DEBUG MainThread ssh.send :: Send 'echo $?'
[2019-07-07 04:54:57,751] 423 DEBUG MainThread ssh.expect :: Output:
0
controller-1:~$
[2019-07-07 04:54:57,752] 282 DEBUG MainThread system_helper.get_hosts:: Filtered hosts: ['controller-0', 'compute-0', 'compute-1', 'compute-2', 'compute-3', 'compute-4', 'controller-1', 'storage-0', 'storage-1']
[2019-07-07 04:54:57,752] 1534 DEBUG MainThread ssh.get_active_controller:: Getting active controller client for wcp_113_121
[2019-07-07 04:54:57,752] 466 DEBUG MainThread ssh.exec_cmd:: Executing command...
[2019-07-07 04:54:57,753] 301 DEBUG MainThread ssh.send :: Send 'whoami'
[2019-07-07 04:54:57,857] 423 DEBUG MainThread ssh.expect :: Output:
sysadmin
controller-1:~$
[2019-07-07 04:54:57,857] 301 DEBUG MainThread ssh.send :: Send ''
[2019-07-07 04:54:57,959] 423 DEBUG MainThread ssh.expect :: Output:
controller-1:~$
[2019-07-07 04:54:57,960] 1185 INFO MainThread ssh.connect :: Attempt to connect to compute-3 from 128.224.150.45...
[2019-07-07 04:54:57,960] 301 DEBUG MainThread ssh.send :: Send '/usr/bin/ssh -o RSAAuthentication=no -o PubkeyAuthentication=no -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null sysadmin@compute-3'
[2019-07-07 04:54:58,131] 423 DEBUG MainThread ssh.expect :: Output:
Warning: Permanently added 'compute-3,192.168.222.103' (ECDSA) to the list of known hosts.
sysadmin@compute-3's password:
[2019-07-07 04:54:58,131] 301 DEBUG MainThread ssh.send :: Send 'Li69nux*'
[2019-07-07 04:54:58,398] 423 DEBUG MainThread ssh.expect :: Output:
Last login: Sat Jul 6 23:06:06 2019 from controller-0
/etc/motd.d/00-header:

compute-3:~$
[2019-07-07 04:54:58,712] 165 INFO MainThread host_helper.reboot_hosts:: Rebooting compute-3
[2019-07-07 04:54:58,713] 301 DEBUG MainThread ssh.send :: Send 'sudo reboot -f'
[2019-07-07 04:54:58,826] 423 DEBUG MainThread ssh.expect :: Output:
Password:

 Send 'system --os-username 'admin' --os-password 'Li69nux*' --os-project-name admin --os-auth-url http://192.168.222.2:5000/v3 --os-user-domain-name Default --os-project-domain-name Default --os-endpoint-type internalURL --os-region-name RegionOne host-list'
[2019-07-07 04:55:40,600] 423 DEBUG MainThread ssh.expect :: Output:
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | compute-0 | worker | unlocked | enabled | available |
| 3 | compute-1 | worker | unlocked | enabled | available |
| 4 | compute-2 | worker | unlocked | enabled | available |
| 5 | compute-3 | worker | unlocked | disabled | offline |
| 6 | compute-4 | worker | unlocked | enabled | available |
| 7 | controller-1 | controller | unlocked | enabled | available |
| 8 | storage-0 | storage | unlocked | enabled | available |
| 9 | storage-1 | storage | unlocked | enabled | available |
+----+--------------+-------------+----------------+-------------+--------------+

Send 'system --os-username 'admin' --os-password 'Li69nux*' --os-project-name admin --os-auth-url http://192.168.222.2:5000/v3 --os-user-domain-name Default --os-project-domain-name Default --os-endpoint-type internalURL --os-region-name RegionOne host-list'
[2019-07-07 05:00:05,427] 423 DEBUG MainThread ssh.expect :: Output:
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | compute-0 | worker | unlocked | enabled | available |
| 3 | compute-1 | worker | unlocked | enabled | available |
| 4 | compute-2 | worker | unlocked | enabled | available |
| 5 | compute-3 | worker | unlocked | enabled | available |
| 6 | compute-4 | worker | unlocked | enabled | available |
| 7 | controller-1 | controller | unlocked | enabled | available |
| 8 | storage-0 | storage | unlocked | enabled | available |
| 9 | storage-1 | storage | unlocked | enabled | available |
+----+--------------+-------------+----------------+-------------+--------------+

Send 'openstack --os-username 'admin' --os-password 'Li69nux*' --os-project-name admin --os-auth-url http://keystone.openstack.svc.cluster.local/v3 --os-user-domain-name Default --os-project-domain-name Default --os-identity-api-version 3 --os-interface internal --os-region-name RegionOne hypervisor list'
[2019-07-07 05:00:43,303] 423 DEBUG MainThread ssh.expect :: Output:
+----+---------------------+-----------------+-----------------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
+----+---------------------+-----------------+-----------------+-------+
| 4 | compute-1 | QEMU | 192.168.223.77 | up |
| 7 | compute-2 | QEMU | 192.168.223.176 | up |
| 10 | compute-0 | QEMU | 192.168.223.171 | up |
| 13 | compute-3 | QEMU | 192.168.223.237 | up |
| 16 | compute-4 | QEMU | 192.168.223.64 | up |
+----+---------------------+-----------------+-----------------+-------+

Send 'openstack --os-username 'admin' --os-password 'Li69nux*' --os-project-name admin --os-auth-url http://keystone.openstack.svc.cluster.local/v3 --os-user-domain-name Default --os-project-domain-name Default --os-identity-api-version 3 --os-interface internal --os-region-name RegionOne network agent list'
[2019-07-07 05:03:19,248] 423 DEBUG MainThread ssh.expect :: Output:
+--------------------------------------+--------------------+-----------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+-----------+-------------------+-------+-------+---------------------------+
| 02266b80-8c4e-40b7-a9fa-72d6695a1d35 | Metadata agent | compute-4 | None | :-) | UP | neutron-metadata-agent |
| 0398ee2d-75b8-4493-8f2a-76e9efe13946 | DHCP agent | compute-2 | nova | :-) | UP | neutron-dhcp-agent |
| 0488ea35-3833-4e95-8c89-57cb3badefbe | L3 agent | compute-3 | nova | XXX | DOWN | neutron-l3-agent |
| 084b585c-3b5a-45df-90be-51f66a1745d1 | NIC Switch agent | compute-2 | None | :-) | UP | neutron-sriov-nic-agent |
| 0dc1dd0e-928d-4827-926f-923c3b645b33 | L3 agent | compute-0 | nova | :-) | UP | neutron-l3-agent |
| 167b07dc-76ad-44f2-acd1-ae73f977af8d | NIC Switch agent | compute-4 | None | :-) | UP | neutron-sriov-nic-agent |
| 1b8f8bb7-a9cd-4bd7-b429-82eb6a57e5ed | Open vSwitch agent | compute-0 | None | :-) | UP | neutron-openvswitch-agent |
| 1d09fdbd-b6bf-468f-9162-21e1984cca40 | L3 agent | compute-1 | nova | :-) | UP | neutron-l3-agent |
| 2ac2053c-073c-4d75-bbac-6367937b9b7b | Open vSwitch agent | compute-3 | None | :-) | UP | neutron-openvswitch-agent |
| 2b3e4e26-7364-4d55-aac8-9e75d5bd517c | NIC Switch agent | compute-3 | None | :-) | UP | neutron-sriov-nic-agent |
| 45a5a758-df01-4d74-b197-f60c023c8342 | Open vSwitch agent | compute-2 | None | :-) | UP | neutron-openvswitch-agent |
| 4684d7f7-6606-4888-ab83-c97867f81cc1 | DHCP agent | compute-3 | nova | :-) | DOWN | neutron-dhcp-agent |
| 46ad03ef-5656-4911-a1dc-20dd48f0c0ae | NIC Switch agent | compute-0 | None | :-) | UP | neutron-sriov-nic-agent |
| 4c4413b2-72ce-4dd0-a609-d35dca96a33c | Metadata agent | compute-0 | None | :-) | UP | neutron-metadata-agent |
| 5593bdb5-8fb0-4589-9417-4eea8200af18 | DHCP agent | compute-4 | nova | :-) | UP | neutron-dhcp-agent |
| 58290c99-d3b3-43ac-9d32-84e4197d52f4 | L3 agent | compute-2 | nova | :-) | UP | neutron-l3-agent |
| 603e727d-0f28-4010-84ab-31fc799f67c1 | Metadata agent | compute-3 | None | :-) | UP | neutron-metadata-agent |
| 639282a0-0f1b-4ef5-8352-a3ea90a0b797 | Open vSwitch agent | compute-4 | None | :-) | UP | neutron-openvswitch-agent |
| 6e13479a-565d-44a1-8b54-30280342362d | Metadata agent | compute-2 | None | :-) | UP | neutron-metadata-agent |
| 79ac04e7-4f7b-4015-be3c-9ff0a411914b | Open vSwitch agent | compute-1 | None | :-) | UP | neutron-openvswitch-agent |
| 844028c3-b522-4e12-83dd-093cb35e7b04 | L3 agent | compute-4 | nova | :-) | UP | neutron-l3-agent |
| 864fbc99-cf8b-422f-bb7b-2ac942340629 | NIC Switch agent | compute-1 | None | :-) | UP | neutron-sriov-nic-agent |
| a647d5ea-012a-4940-8dc4-4db4cd17b619 | DHCP agent | compute-0 | nova | :-) | UP | neutron-dhcp-agent |
| ad20d257-e40c-48f8-ae17-088c6cd64b18 | Metadata agent | compute-1 | None | :-) | UP | neutron-metadata-agent |
| caad4860-bf10-48b6-bd2a-ec6fb1b10c2a | DHCP agent | compute-1 | nova | :-) | UP | neutron-dhcp-agent |
+--------------------------------------+--------------------+-----------+-------------------+-------+-------+---------------------------+
controller-1:~$
[2019-07-07 05:03:19,248] 301 DEBUG MainThread ssh.send :: Send 'echo $?'

Severity
--------
Major
Steps to Reproduce
------------------
1. Verify system health by checking alarms, hosts in the available state, and the network agent list.
2. Force reboot a compute and wait for the host to become available again.
3. As per the description, neutron-l3-agent and neutron-dhcp-agent on the rebooted compute never came back up (see the sketch below).
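
For reference, a condensed shell sketch of the reproduction sequence, based on the automation log above (host names and credentials are specific to this lab; commands assume a keystone_admin environment on the active controller):

  # 1. Baseline: no alarms, all hosts available, all network agents alive
  fm alarm-list
  system host-list
  openstack network agent list

  # 2. On the target compute (compute-3 here), force an ungraceful reboot,
  #    then poll from the controller until the host is unlocked/enabled/available again
  sudo reboot -f
  system host-list

  # 3. Re-check the agents; the failure is the l3/dhcp agents on the rebooted
  #    compute remaining DOWN indefinitely
  openstack network agent list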

Expected Behavior
------------------
After the force reboot, the network agent list shows all binaries up.
Actual Behavior
----------------
As per the description, two binaries are not up.
Reproducibility
---------------
Tried only once on the latest load.

System Configuration
--------------------
storage system
Branch/Pull Time/Commit
-----------------------
20190706T013000Z
Last Pass
---------

Not known
Timestamp/Logs
--------------
2019-07-07T04:54:58.000
Test Activity
-------------
Regression test

description: updated
Revision history for this message
Ghada Khalil (gkhalil) wrote :

Please attach the collect logs.

tags: added: stx.networking
Changed in starlingx:
status: New → Incomplete
Numan Waheed (nwaheed)
tags: added: stx.retestneeded
Revision history for this message
Anujeyan Manokeran (anujeyan) wrote :
Revision history for this message
Ghada Khalil (gkhalil) wrote :

Marking as stx.2.0 gating; the VIM should recover the neutron agents/routers upon host failure

tags: added: stx.2.0 stx.nfv
Changed in starlingx:
importance: Undecided → High
status: Incomplete → Triaged
assignee: nobody → Kevin Smith (kevin.smith.wrs)
Numan Waheed (nwaheed)
tags: added: stx.regression
Revision history for this message
Kevin Smith (kevin.smith.wrs) wrote :

Logs show the successful triggering of the network and router reschedule as compute-3 goes down, so all is good there. It seems, however, that the L3 agent on compute-3 never recovers -- probably a problem with the pod. Timestamps in the neutron-l3-agent container log on compute-3 end before the reboot, so it is probably from the old pod, pre-reboot. I have suggested a change to the test to gather the state of the pods on the compute nodes in case this problem recurs.
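
For reference, a minimal sketch of the kind of pod-state capture suggested above; the 'openstack' namespace and the selectors are assumptions about a typical stx-openstack deployment, not taken from this bug:

  # Hypothetical check of the neutron agent pods on the rebooted host
  kubectl get pods -n openstack -o wide --field-selector spec.nodeName=compute-3
  kubectl get pods -n openstack -o wide | grep -E 'neutron-(l3|dhcp)-agent'
  kubectl describe pod -n openstack <neutron-l3-agent-pod-on-compute-3>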

Revision history for this message
Ghada Khalil (gkhalil) wrote :

@Kevin, So there isn't enough info to further investigate this issue? Peng needs to reproduce and make sure the pod information is collected? Is that correct?

@Peng, Are you able to provide the requested information?

Changed in starlingx:
status: Triaged → Incomplete
Revision history for this message
Kevin Smith (kevin.smith.wrs) wrote :

The VIM won't initiate any action to rebalance the routers/networks to a newly available host until all agents are alive, so the focus should be on why the neutron-l3-agent pod did not recover after the force reboot, not on the VIM. Taking a second look, however, I do see logs from a neutron-l3-agent pod on compute-3 after the reboot, so it appears the pod did come up (my mistake). The logs indicate that the l3 agent was unable to contact the neutron server, which is likely why it showed as dead (the only logs present are like those below; the timeout is incremented until it reaches 10 minutes, after which the logs appear every 10 minutes):

{"log":"2019-07-07 05:04:02.143 20 WARNING neutron_lib.rpc [req-fe11a876-7c24-4cc3-a33d-8d41545f1ed0 - - - - -] Increasing timeout for get_host_ha_router_count calls to 240 seconds. Restart the agent to restore it to the default value.: MessagingTimeout: Timed out waiting for a reply to message ID 94c490c3248c4457af27c0cf5bc75537\n","stream":"stdout","time":"2019-07-07T05:04:02.14397681Z"}
{"log":"2019-07-07 05:04:03.708 20 WARNING neutron.agent.l3.agent [req-fe11a876-7c24-4cc3-a33d-8d41545f1ed0 - - - - -] l3-agent cannot contact neutron server to retrieve HA router count. Check connectivity to neutron server. Retrying... Detailed message: Timed out waiting for a reply to message ID 94c490c3248c4457af27c0cf5bc75537.: MessagingTimeout: Timed out waiting for a reply to message ID 94c490c3248c4457af27c0cf5bc75537\n","stream":"stdout","time":"2019-07-07T05:04:03.708695183Z"}

Revision history for this message
Anujeyan Manokeran (anujeyan) wrote :

I was unable to reproduce this in 10 repeated test runs. I will also monitor the next MTC automation regression suite run.

Revision history for this message
Ghada Khalil (gkhalil) wrote :

Thanks Jeyan for the info. I'm going to close this for now as it is not reproducible.

On a related note, there are two bug fixes that will also help with pod recovery in the future:
https://bugs.launchpad.net/starlingx/+bug/1836232 -- merged on 2019-07-15
https://bugs.launchpad.net/starlingx/+bug/1836413 -- currently in review

Changed in starlingx:
status: Incomplete → Invalid
Revision history for this message
Ghada Khalil (gkhalil) wrote :

Bart Wensley has seen this issue on another AIO-DX system (wolfpass-1-2) after force rebooting a controller.

Based on initial investigation by Joseph Richard, it appears that the neutron-server is not receiving RPCs from the agent. It seems like these RPCs are lost by rabbit somehow.

Re-opening this bug as further investigation is required.
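
As a triage note, a hedged sketch of how one might confirm from the rabbitmq side that the agent's topic queue exists but messages are not being consumed (assumes shell access inside the rabbitmq container; the 'neutron' vhost and q-l3-plugin queue name come from the later comments):

  # Hypothetical check: does the queue exist, is anything consuming it,
  # and are messages piling up?
  rabbitmqctl list_queues -p neutron name messages consumers | grep q-l3-plugin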

Changed in starlingx:
status: Invalid → Confirmed
assignee: Kevin Smith (kevin.smith.wrs) → Joseph Richard (josephrichard)
Revision history for this message
Joseph Richard (josephrichard) wrote :

This was seen again on a standard system (wildcat_63-66) after force-rebooting the active controller, controller-1.

Ghada Khalil (gkhalil)
summary: neutron-l3-agent and neutron-dhcp-agent never recovered after force
- reboot on compute
+ reboot
Ghada Khalil (gkhalil)
tags: removed: stx.nfv
Revision history for this message
Joseph Richard (josephrichard) wrote :

These RPCs are getting lost inside of rabbitmq.

It appears that this binding [1] exists in rabbitmq, but rabbitmq is ignoring it: when messages come in on the neutron exchange with topic q-l3-plugin, they are not delivered to the q-l3-plugin queue.

I see a similar launchpad [2] opened back in 2017 in the Mirantis OpenStack Project that hasn't been updated since.

By setting the alternate-exchange to q-l3-plugin_fanout [3], the system becomes functional again, which confirms that the issue is with the routing inside rabbitmq. When I move the alternate exchange away from q-l3-plugin_fanout, the RPC starts failing again.

I looked for upstream bug reports in oslo.messaging and neutron, but I didn't see any reports of this. I don't see this issue open upstream for rabbitmq-server either.

[1] neutron exchange q-l3-plugin queue q-l3-plugin []
[2] https://bugs.launchpad.net/mos/+bug/1696397
[3] rabbitmqctl -p neutron set_policy AE "neutron" '{"alternate-exchange":"q-l3-plugin_fanout"}'
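
A hedged sketch of inspecting the binding and applying/reverting the workaround in [3]; the vhost, exchange, and queue names come from this bug, while running rabbitmqctl inside the rabbitmq container (e.g. via kubectl exec) is an assumption about the deployment:

  # Confirm the binding from the 'neutron' exchange to the q-l3-plugin queue exists
  rabbitmqctl list_bindings -p neutron source_name destination_name routing_key | grep q-l3-plugin

  # Workaround [3]: send otherwise-unrouted messages on the 'neutron' exchange
  # to q-l3-plugin_fanout via an alternate exchange
  rabbitmqctl -p neutron set_policy AE "neutron" '{"alternate-exchange":"q-l3-plugin_fanout"}'

  # Revert the workaround (after which the RPCs start failing again)
  rabbitmqctl clear_policy -p neutron AE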

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to config (master)

Related fix proposed to branch: master
Review: https://review.opendev.org/678044

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Related fix proposed to branch: master
Review: https://review.opendev.org/678848

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on config (master)

Change abandoned by Joseph Richard (<email address hidden>) on branch: master
Review: https://review.opendev.org/678848
Reason: Just hit this error during my pre-submission testing. Given that disabling the ha policy hasn't entirely prevented this, I am abandoning this change.

Revision history for this message
Yosief Gebremariam (ygebrema) wrote :

A similar issue was observed in build "r/stx.2.0", BUILD_DATE="2019-10-05 15:05:58 +0000"
Lab: WCP-113_121

[sysadmin@controller-0 ~(keystone_admin)]$ openstack network agent list
+--------------------------------------+--------------------+-----------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+-----------+-------------------+-------+-------+---------------------------+
| 05e5f12a-5a10-445c-96b6-2e0a6d39e576 | L3 agent | compute-2 | nova | :-) | UP | neutron-l3-agent |
| 15cf4432-c80c-4655-aba2-1bf8c0f03f3c | L3 agent | compute-4 | nova | :-) | UP | neutron-l3-agent |
| 24f0b8ca-5112-401a-9eac-7f7ddb59bb4c | NIC Switch agent | compute-1 | None | :-) | UP | neutron-sriov-nic-agent |
| 25106680-ab5f-4c40-a8e3-4cb0c5accc46 | Metadata agent | compute-1 | None | :-) | UP | neutron-metadata-agent |
| 270c244f-b85c-497d-b2eb-d0537b1fc943 | NIC Switch agent | compute-2 | None | :-) | UP | neutron-sriov-nic-agent |
| 4e925405-b416-48cb-8b24-64985839caa3 | DHCP agent | compute-4 | nova | :-) | UP | neutron-dhcp-agent |
| 4fe9cd46-3782-44fe-9e78-53f96ff25858 | DHCP agent | compute-1 | nova | :-) | UP | neutron-dhcp-agent |
| 6613b922-0ed5-48d1-b584-aaac5315e9ed | DHCP agent | compute-3 | nova | :-) | DOWN | neutron-dhcp-agent |
| 937feb5f-8368-47a2-aaa4-1020b19087b2 | Open vSwitch agent | compute-4 | None | :-) | UP | neutron-openvswitch-agent |
| 9648b959-ac1b-49cf-b128-3f6ab0cdb71b | DHCP agent | compute-0 | nova | :-) | DOWN | neutron-dhcp-agent |
| 96994864-c24e-46c2-9267-8a0a551582c8 | L3 agent | compute-1 | nova | :-) | UP | neutron-l3-agent |
| b115bddf-ed09-4359-96aa-8b3a7b211566 | Open vSwitch agent | compute-0 | None | :-) | UP | neutron-openvswitch-agent |
| b287cb7b-9738-4da8-8366-ad2ddfeb2b39 | L3 agent | compute-3 | nova | XXX | DOWN | neutron-l3-agent |
| b91d7526-f3fb-44df-a57c-cfbacda172e9 | NIC Switch agent | compute-3 | None | :-) | UP | neutron-sriov-nic-agent |
| c07f6891-ab9a-4797-931a-5a30a71298d8 | DHCP agent | compute-2 | nova | :-) | UP | neutron-dhcp-agent |
| c40129a8-3431-4a77-a135-b52d1b06e1cc | L3 agent | compute-0 | nova | XXX | DOWN | neutron-l3-agent |
| ce713c65-542f-446a-a1b5-59f2b09af535 | Metadata agent | compute-4 | None | :-) | UP | neutron-metadata-agent |
| cfb8ce49-f114-494e-a380-a761786dbc4c | Open vSwitch agent | compute-2 | None | :-) | UP | neutron-openvswitch-agent |
| e14bcf34-b41a-49aa-a2e3-474eb02d9f8e | Open vSwitch agent | compute-1 | None | :-) | UP | neutron-openvswit...


Revision history for this message
Ghada Khalil (gkhalil) wrote :

Requesting a re-test of this issue with openstack train on stx master

Changed in starlingx:
status: Confirmed → Incomplete
Revision history for this message
Peng Peng (ppeng) wrote :

Issue was not reproduced on train
2019-11-21_20-00-00
wcp_3-6

Revision history for this message
Ghada Khalil (gkhalil) wrote :

Issue was not reproduced in train. There are no plans to investigate/resolve this for stein / stx.2.0. Closing.

tags: added: stx.3.0
removed: stx.2.0
Changed in starlingx:
status: Incomplete → Won't Fix
tags: added: stx.2.0
removed: stx.3.0
Yang Liu (yliu12)
tags: removed: stx.retestneeded
Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Change abandoned by Bart Wensley (<email address hidden>) on branch: master
Review: https://review.opendev.org/678044
Reason: This patch has been idle for more than six months. I am abandoning it to keep the review queue sane. If you are still interested in working on this patch, please unabandon it and upload a new patchset.
