Batch creation of VMs fails because of timeout error

Bug #2005111 reported by bryan
This bug affects 2 people
Affects: neutron
Status: New
Importance: Undecided
Assigned to: Miro Tomaska
Milestone: (none)

Bug Description

When I tried to create 10 instances on a compute node:
# nova list --all-t | grep test_batch
nova CLI is deprecated and will be a removed in a future release
| a501ea6c-bee1-4d1f-9afd-d8eaf0e5964f | test_batch-1 | 0584825def4f48dc94d553c1ecf28921 | ACTIVE | - | Running | rally-test-network=192.168.15.189, 192.168.15.83 |
| f142200d-9432-41b5-b334-760a3b2d7be0 | test_batch-10 | 0584825def4f48dc94d553c1ecf28921 | ACTIVE | - | Running | rally-test-network=192.168.15.245 |
| b6b590cf-0423-4306-8e4a-51c5305affb7 | test_batch-2 | 0584825def4f48dc94d553c1ecf28921 | ERROR | - | NOSTATE | |
| 201dee76-fab2-4ed5-9ca4-f599fc6a10d4 | test_batch-3 | 0584825def4f48dc94d553c1ecf28921 | ACTIVE | - | Running | rally-test-network=192.168.15.110 |
| 6f2f5160-cc49-4060-8c17-bbf77ab10fdc | test_batch-4 | 0584825def4f48dc94d553c1ecf28921 | ACTIVE | - | Running | rally-test-network=192.168.15.242 |
| 83d5584c-fd5c-4976-aa0b-4dd585de08fa | test_batch-5 | 0584825def4f48dc94d553c1ecf28921 | ACTIVE | - | Running | rally-test-network=192.168.15.56 |
| 68cbdbe5-53de-47d5-9d0e-44acb10075af | test_batch-6 | 0584825def4f48dc94d553c1ecf28921 | ERROR | - | NOSTATE | |
| 1413f96f-491b-489b-bb3c-36e81372015b | test_batch-7 | 0584825def4f48dc94d553c1ecf28921 | ACTIVE | - | Running | rally-test-network=192.168.15.158 |
| 77731a84-7da7-4416-9769-c2fd65114a81 | test_batch-8 | 0584825def4f48dc94d553c1ecf28921 | ACTIVE | - | Running | rally-test-network=192.168.15.236 |
| b815d3cc-41b4-4c31-8f93-4dbeddd98a91 | test_batch-9 | 0584825def4f48dc94d553c1ecf28921 | ACTIVE | - | Running | rally-test-network=192.168.15.193 |

# nova interface-list 201dee76-fab2-4ed5-9ca4-f599fc6a10d4
nova CLI is deprecated and will be a removed in a future release
+------------+--------------------------------------+--------------------------------------+----------------+-------------------+-----+
| Port State | Port ID | Net ID | IP addresses | MAC Addr | Tag |
+------------+--------------------------------------+--------------------------------------+----------------+-------------------+-----+
| ACTIVE | b72e161e-03ff-4df7-b1a1-b358a1c98f9c | a0a294ed-812e-451d-bcc0-5879026560e2 | 192.168.15.110 | fa:16:3e:d3:b3:c7 | - |
+------------+--------------------------------------+--------------------------------------+----------------+-------------------+-----+

test_batch-2 and test_batch-6 failed. The reason test_batch-2 failed is that Nova was waiting for a "network-vif-plugged" event for this VM, but Neutron sent a "network-changed" event instead, so Nova failed.
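
A quick way to cross-check this from the logs (just a sketch; the log paths assume the kolla layout seen in the log lines below, so adjust them for your deployment):

On the compute node, see what Nova logged for the failed instance while waiting for the event:
# grep b6b590cf-0423-4306-8e4a-51c5305affb7 /var/log/kolla/nova/nova-compute.log | grep -Ei 'network-vif-plugged|network-changed|timeout'

On the controllers, see which events Neutron actually sent for the failed port:
# zgrep 685917b8-3277-485c-bb12-0a4c6d1aa595 /var/log/kolla/neutron/neutron-server.log* | grep -i 'Sending events'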

However, when I checked Neutron, I found that some attributes of these two ports differ:

Successful port:
Request body: {'port': {'device_id': '201dee76-fab2-4ed5-9ca4-f599fc6a10d4', 'network_id': 'a0a294ed-812e-451d-bcc0-5879026560e2', 'admin_state_up': True, 'tenant_id': '0584825def4f48dc94d553c1ecf28921'}}

CheckRevisionNumberCommand(name=b72e161e-03ff-4df7-b1a1-b358a1c98f9c, resource={'id': 'b72e161e-03ff-4df7-b1a1-b358a1c98f9c', 'name': '', 'network_id': 'a0a294ed-812e-451d-bcc0-5879026560e2', 'tenant_id': '0584825def4f48dc94d553c1ecf28921', 'mac_address': 'fa:16:3e:d3:b3:c7', 'admin_state_up': True, 'status': 'ACTIVE', 'device_id': '201dee76-fab2-4ed5-9ca4-f599fc6a10d4', 'device_owner': 'compute:nova', 'standard_attr_id': 265663, 'fixed_ips': [{'subnet_id': '9cd5439f-cf67-46ca-b29c-703c406d0420', 'ip_address': '192.168.15.110'}], 'allowed_address_pairs': [], 'extra_dhcp_opts': [], 'security_groups': ['782b406d-c521-43f6-91e9-671a8ba08a87'], 'description': '', 'binding:vnic_type': 'normal', 'binding:profile': {}, 'binding:host_id': 'compute-10', 'binding:vif_type': 'ovs', 'binding:vif_details': {'port_filter': True, 'connectivity': 'l2'}, 'port_security_enabled': True, 'tags': [], 'created_at': '2023-02-01T23:16:54Z', 'updated_at': '2023-02-01T23:16:56Z', 'revision_number': 4, 'project_id': '0584825def4f48dc94d553c1ecf28921', 'network': {'id': 'a0a294ed-812e-451d-bcc0-5879026560e2', 'name': 'rally-test-network', 'tenant_id': 'ba4d5e8e34e746f1a1d6d7cb712a4eb7', 'admin_state_up': True, 'mtu': 1500, 'status': 'ACTIVE', 'subnets': ['9cd5439f-cf67-46ca-b29c-703c406d0420'], 'standard_attr_id': 144871, 'shared': True, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'vlan_transparent': None, 'description': '', 'port_security_enabled': True, 'tags': [], 'created_at': '2023-01-04T00:31:33Z', 'updated_at': '2023-01-04T00:50:21Z', 'revision_number': 3, 'project_id': 'ba4d5e8e34e746f1a1d6d7cb712a4eb7', 'provider:network_type': 'geneve', 'provider:physical_network': None, 'provider:segmentation_id': 1450}}

Failed port:
neutron-server.log.1:2023-02-01 15:16:54.523 25 DEBUG neutron.api.v2.base [req-d40bc291-8d8f-43cb-aea2-fd4f54922114 5d9a7cccf26c44e1800cee064de79113 d9c1ae22d6a84b438d83c7563ded2415 - default default] Request body: {'port': {'device_id': 'b6b590cf-0423-4306-8e4a-51c5305affb7', 'device_owner': 'compute:nova', 'binding:host_id': 'compute-10', 'dns_name': 'test-batch-2'}} prepare_request_body /var/lib/kolla/venv/lib/python3.6/site-packages/neutron/api/v2/base.py:729

CheckRevisionNumberCommand(name=685917b8-3277-485c-bb12-0a4c6d1aa595, resource={'id': '685917b8-3277-485c-bb12-0a4c6d1aa595', 'name': '', 'network_id': 'a0a294ed-812e-451d-bcc0-5879026560e2', 'tenant_id': '0584825def4f48dc94d553c1ecf28921', 'mac_address': 'fa:16:3e:4e:ab:c1', 'admin_state_up': True, 'status': 'DOWN', 'device_id': 'b6b590cf-0423-4306-8e4a-51c5305affb7', 'device_owner': 'compute:nova', 'standard_attr_id': 265660, 'fixed_ips': [{'subnet_id': '9cd5439f-cf67-46ca-b29c-703c406d0420', 'ip_address': '192.168.15.90'}], 'allowed_address_pairs': [], 'extra_dhcp_opts': [], 'security_groups': ['782b406d-c521-43f6-91e9-671a8ba08a87'], 'description': '', 'binding:vnic_type': 'normal', 'binding:profile': {}, 'binding:host_id': 'compute-10', 'binding:vif_type': 'unbound', 'binding:vif_details': {}, 'port_security_enabled': True, 'tags': [], 'created_at': '2023-02-01T23:16:54Z', 'updated_at': '2023-02-01T23:16:54Z', 'revision_number': 2, 'project_id': '0584825def4f48dc94d553c1ecf28921', 'network': {'id': 'a0a294ed-812e-451d-bcc0-5879026560e2', 'name': 'rally-test-network', 'tenant_id': 'ba4d5e8e34e746f1a1d6d7cb712a4eb7', 'admin_state_up': True, 'mtu': 1500, 'status': 'ACTIVE', 'subnets': ['9cd5439f-cf67-46ca-b29c-703c406d0420'], 'standard_attr_id': 144871, 'shared': True, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'vlan_transparent': None, 'description': '', 'port_security_enabled': True, 'tags': [], 'created_at': '2023-01-04T00:31:33Z', 'updated_at': '2023-01-04T00:50:21Z', 'revision_number': 3, 'project_id': 'ba4d5e8e34e746f1a1d6d7cb712a4eb7', 'provider:network_type': 'geneve', 'provider:physical_network': None, 'provider:segmentation_id': 1450}}

This is easy to reproduce, especially when creating a batch of instances.

bryan (bryansoong21)
description: updated
Revision history for this message
bryan (bryansoong21) wrote :

This happens on OpenStack Xena.

Revision history for this message
Rodolfo Alonso (rodolfo-alonso-hernandez) wrote :

Hi Bryan:

Can you share the Nova and Neutron logs? Is it because there is a timeout when processing the ports? Does it happen with smaller batches of VMs?

Regards.

Revision history for this message
bryan (bryansoong21) wrote (last edit ):

Hello Rodolfo,

Thank you so much for your quick response. The error is "eventlet.timeout.Timeout: 300 seconds".

The first time, this happened when we were creating a batch of about 120+ VMs. But it also happens occasionally when I try about 10 or fewer instances.
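
The 300 seconds matches Nova's default VIF-plugging wait, so it may be worth confirming how the compute node is configured (a quick check, assuming the standard nova.conf location; it may live elsewhere in a containerized deployment):

# grep -E 'vif_plugging_(timeout|is_fatal)' /etc/nova/nova.conf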

Here are the instance action list and logs to make it easier for you to look into this problem.

Successful VM
# nova instance-action-list 201dee76-fab2-4ed5-9ca4-f599fc6a10d4
nova CLI is deprecated and will be a removed in a future release
+--------+------------------------------------------+---------+----------------------------+----------------------------+
| Action | Request_ID | Message | Start_Time | Updated_At |
+--------+------------------------------------------+---------+----------------------------+----------------------------+
| create | req-fc7f640c-eb26-4bb5-9688-f4ac494aa75a | - | 2023-02-01T23:16:51.000000 | 2023-02-01T23:16:56.000000 |
+--------+------------------------------------------+---------+----------------------------+----------------------------+

Failed VM:
# nova instance-action-list b6b590cf-0423-4306-8e4a-51c5305affb7
nova CLI is deprecated and will be a removed in a future release
+--------+------------------------------------------+---------+----------------------------+----------------------------+
| Action | Request_ID | Message | Start_Time | Updated_At |
+--------+------------------------------------------+---------+----------------------------+----------------------------+
| create | req-fc7f640c-eb26-4bb5-9688-f4ac494aa75a | - | 2023-02-01T23:16:51.000000 | 2023-02-01T23:21:57.000000 |
+--------+------------------------------------------+---------+----------------------------+----------------------------+

If anything is missing, please let me know.

Regards.

Miro Tomaska (mtomaska)
Changed in neutron:
assignee: nobody → Miro Tomaska (mtomaska)
Revision history for this message
Miro Tomaska (mtomaska) wrote :

Hi Bryan,

Can you clarify "batch create"? Are you using Heat or a shell script to create these VMs?
Can you share the HOT template or the script you are using that reproduces the issue?
Thanks!

Revision history for this message
bryan (bryansoong21) wrote (last edit ):

Hello Miro,

We have met this issue in several cases; here are the steps:

1. Using Rally for a batch test; here is our configuration:
  NovaServers.boot_and_delete_server:
  - args:
      flavor:
        name: "{{flavor_name}}"
      image:
        name: "{{image_name}}"
      force_delete: false
      nics:
        - net-name: "{{network_name}}"
    runner:
      type: constant
      times: 250
      concurrency: 100
    context:
      users:
        tenants: 50
        users_per_tenant: 5
    sla:
      failure_rate:
        max: 0

  For the Rally test, we changed "concurrency" to different values, from 50 to 200. The number of failed instances varied from 1 to around 30, about 10 in most cases.

2. Using command line:
openstack server create test_batch --image 095c2bab-905e-448b-a0ef-3aabd1da16d2 --nic net-id=a0a294ed-812e-451d-bcc0-5879026560e2 --flavor b16175f3-213b-4771-a857-35d8ff251f5e --min 10 --max 10 --availability-zone nova:compute-10

The attached logs and error messages came from this command-line case. This happens on every compute node, and if the logs are not enough, I can reproduce it again.
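
In case it helps, a small loop like the following (just a sketch, reusing the flavor/image/network IDs from the command above; the server names are arbitrary) repeats the batch creation and counts how many servers end up in ERROR:

for i in 1 2 3; do
    openstack server create repro_batch_${i} --image 095c2bab-905e-448b-a0ef-3aabd1da16d2 \
        --nic net-id=a0a294ed-812e-451d-bcc0-5879026560e2 --flavor b16175f3-213b-4771-a857-35d8ff251f5e \
        --min 10 --max 10 --availability-zone nova:compute-10
    sleep 600    # wait past the 300 s vif-plugging timeout so the batch settles
    openstack server list --name "repro_batch_${i}" --status ERROR -f value -c Name | wc -l
done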

Thanks very much for your time.

Regards.

Revision history for this message
Miro Tomaska (mtomaska) wrote :

Thanks Bryan, would it be possible for you to share just a slice of the logs from when the issue occurred? The attached logs total ~850 MB :-), and the majority can probably be ignored. Or at least give a timestamp to pay attention to. I noticed some logs you posted earlier have the timestamp 2023-02-01 15:16:54.523.

Revision history for this message
bryan (bryansoong21) wrote (last edit ):
  • log.7z (727.5 KiB, application/x-7z-compressed)

Hello Miro, thank you for your reply.

I have reproduced this problem and gathered new logs; please refer to the attachment for detailed information. The attached logs are about 24 MB in total.

"out.txt" file is the VM list together with "nova instance-action-list" and "nova interface-list" command results. Because I am in (GMT-8), so the log time (around 2023-02-13 13:00) should be 8 hours earlier than the instance-action time (around 2023-02-13T21:00).

Here is my create command:
openstack server create reproduce_vm --flavor b16175f3-213b-4771-a857-35d8ff251f5e --network a0a294ed-812e-451d-bcc0-5879026560e2 --image 095c2bab-905e-448b-a0ef-3aabd1da16d2 --min 10 --max 10 --availability-zone nova:compute-10

Thank you again for your help, regards.

Revision history for this message
bryan (bryansoong21) wrote (last edit ):

Hello Miro,

For a successful VM (reproduce_vm-1, VM id: ca1b07a4-c0f3-4ec8-8997-d54b18a5cd46, port id: fc077e33-90d8-4500-b16b-ed7a509fe73d), I searched the Neutron logs and found the following events:

os-control-3:
[root@os-control-3 home]# zgrep fc077e33-90d8-4500-b16b-ed7a509fe73d * | grep -i "Sending events"
neutron-server_os-control-3.log:2023-02-13 13:00:09.001 26 DEBUG neutron.notifiers.nova [-] Sending events: [{'name': 'network-changed', 'server_uuid': 'ca1b07a4-c0f3-4ec8-8997-d54b18a5cd46', 'tag': 'fc077e33-90d8-4500-b16b-ed7a509fe73d'}] send_events /var/lib/kolla/venv/lib/python3.6/site-packages/neutron/notifiers/nova.py:271

os-control-2:
[root@os-control-2 home]# zgrep fc077e33-90d8-4500-b16b-ed7a509fe73d * | grep -i "Sending events"
[root@os-control-2 home]#

os-control-1:
[root@os-control-1 home]# zgrep fc077e33-90d8-4500-b16b-ed7a509fe73d * | grep -i "Sending events"
neutron-server_os-control-1.log:2023-02-13 13:00:10.395 26 DEBUG neutron.notifiers.nova [-] Sending events: [{'server_uuid': 'ca1b07a4-c0f3-4ec8-8997-d54b18a5cd46', 'name': 'network-vif-plugged', 'status': 'completed', 'tag': 'fc077e33-90d8-4500-b16b-ed7a509fe73d'}] send_events /var/lib/kolla/venv/lib/python3.6/site-packages/neutron/notifiers/nova.py:271
neutron-server_os-control-1.log:2023-02-13 13:00:12.437 26 DEBUG neutron.notifiers.nova [-] Sending events: [{'server_uuid': 'ca1b07a4-c0f3-4ec8-8997-d54b18a5cd46', 'name': 'network-vif-plugged', 'status': 'completed', 'tag': 'fc077e33-90d8-4500-b16b-ed7a509fe73d'}] send_events /var/lib/kolla/venv/lib/python3.6/site-packages/neutron/notifiers/nova.py:271
[root@os-control-1 home]#

For a failed VM (reproduce_vm-3, VM id: 5439157d-2c5d-4a14-8596-5fe0ba92501c, port id: f05a4032-4ee2-4803-9c73-31dfa7282014), I searched the Neutron logs but could not find any 'network-vif-plugged' events; here are the logs:

os-control-1:
[root@os-control-1 home]# zgrep f05a4032-4ee2-4803-9c73-31dfa7282014 * | grep -i "Sending events"
[root@os-control-1 home]#

os-control-2:
[root@os-control-2 home]# zgrep f05a4032-4ee2-4803-9c73-31dfa7282014 * | grep -i "Sending events"
neutron-server_os-control-2.log:2023-02-13 13:00:09.565 23 DEBUG neutron.notifiers.nova [-] Sending events: [{'name': 'network-changed', 'server_uuid': '5439157d-2c5d-4a14-8596-5fe0ba92501c', 'tag': 'f05a4032-4ee2-4803-9c73-31dfa7282014'}] send_events /var/lib/kolla/venv/lib/python3.6/site-packages/neutron/notifiers/nova.py:271

os-control-3:
[root@os-control-3 home]# zgrep f05a4032-4ee2-4803-9c73-31dfa7282014 * | grep -i "Sending events"
neutron-server_os-control-3.log:2023-02-13 13:05:12.444 24 DEBUG neutron.notifiers.nova [-] Sending events: [{'server_uuid': '5439157d-2c5d-4a14-8596-5fe0ba92501c', 'name': 'network-vif-deleted', 'tag': 'f05a4032-4ee2-4803-9c73-31dfa7282014'}] send_events /var/lib/kolla/venv/lib/python3.6/site-packages/neutron/notifiers/nova.py:271

For the successful port, the following log entries are also present:
[root@os-control-1 home]# zgrep fc077e33-90d8-4500-b16b-ed7a509fe73d /var/log/kolla/neutron/* | grep -i Provisioning | grep "completed"
/var/log/kolla/neutron/neutron-server.log:2023-02-13 13:00:10.324 26 DEBUG neutron.db.provisioning_blocks [req-2f760459...
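
For comparison, the same check against the failed port shows whether its provisioning block was ever marked complete (same grep as above, just with the failed port id):
# zgrep f05a4032-4ee2-4803-9c73-31dfa7282014 /var/log/kolla/neutron/* | grep -i Provisioning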


Revision history for this message
bryan (bryansoong21) wrote (last edit ):

Hello Miro,

I reproduced this problem and checked ovn-northd; here is what I found, supposing we are creating a VM with IP address 192.168.10.25.

1. If ovn-northd has a residual port with the same IP address (192.168.10.25) while that IP is still shown as available in Neutron, then the new port will fail.
2. I then wrote a script to clean up the residual port and recreate it with the same IP, and the port could be created successfully.

So the inconsistency between Neutron and ovn-northd caused this problem. Creating a batch of ports puts some pressure on ovn-northd, since it requires more resources than a single port. I suspect that in our case leader election caused the problem. We are using a 3-node cluster (node-1, node-2 and node-3); suppose the leader is node-1, and because of the batching pressure the leader switches to node-2 due to a slow response. Perhaps node-1 received the request but handled it slowly, and the port was left behind in ovn-northd.
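
For reference, here is a minimal sketch of the kind of cleanup script mentioned in point 2 (it assumes admin credentials for the OpenStack CLI and that ovn-nbctl can reach the NB database; the OVN driver names logical switch ports after the Neutron port UUID, but some LSPs, for example the provnet-* localnet ports, are not Neutron ports, so review the diff before deleting anything):

# openstack port list -f value -c ID | sort > /tmp/neutron_ports.txt
# ovn-nbctl --bare --columns=name list Logical_Switch_Port | grep . | sort > /tmp/ovn_lsps.txt
# comm -13 /tmp/neutron_ports.txt /tmp/ovn_lsps.txt    # names present in OVN NB but not in Neutron
# ovn-nbctl lsp-del <residual-lsp-name>                # delete a reviewed residual port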

Regards.

Revision history for this message
Miro Tomaska (mtomaska) wrote :

Hi Bryan, thanks for the info. Yes, this could be a race condition between the Neutron and OVN DBs. I am curious whether running neutron_ovn_db_sync_util.py [1] has the same effect as your workaround script. Running neutron_ovn_db_sync_util is also just a workaround, so we will need a more proper solution. Sorry, I have not gotten to the root cause of this problem yet.

[1]https://github.com/openstack/neutron/blob/master/neutron/cmd/ovn/neutron_ovn_db_sync_util.py#L155
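
For reference, a typical invocation looks like the following (just a sketch; the config-file paths depend on the deployment, and "repair" mode fixes inconsistencies while "log" mode only reports them):

# neutron-ovn-db-sync-util --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --ovn-neutron_sync_mode repair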

Revision history for this message
bryan (bryansoong21) wrote :

Hi Miro, thank you so much for your persistence and time on this problem. Take your time, since I can use my script to find the port mismatches between Neutron and ovn-northd and delete the residual ports in ovn-northd as a workaround for this issue.

Thank you again for your work. This issue is really tricky; please let me know if you have any further findings.

Revision history for this message
Dongcan Ye (hellochosen) wrote :

Hi, any update on this bug?

I can reproduce this on OpenStack Zed (OVS version 2.17.7 / OVN version 22.06.1); the bug reproduces easily.

Using a command like:
# openstack server create test_vm --flavor XXXXXX --network XXXXXX --image XXXXXX --min 10 --max 10 --availability-zone nova:XXXXXX

There are usually at least 2 or 3 instances that fail to create because of a VIF plugging timeout error.

VM port UUID: 6d5eccfa-069e-4058-a1c8-87bec9c1c280

Here is some logs info:

1. ovsdb server log:
record 554: 2023-05-19 10:48:42.399
  table Interface insert row "tap6d5eccfa-06" (8d7bdf97):
  table Open_vSwitch row 1a4db534 (1a4db534) diff:
  table Bridge row "br-int" (ee9ab5d4) diff:
  table Port insert row "tap6d5eccfa-06" (a4be79ec):

record 555: 2023-05-19 10:48:42.411
  table Interface row "tap6d5eccfa-06" (8d7bdf97) diff:
  table Open_vSwitch row 1a4db534 (1a4db534) diff:

record 556: 2023-05-19 10:48:42.909
  table Interface row "tap6d5eccfa-06" (8d7bdf97) diff:

record 557: 2023-05-19 10:48:42.941 "ovs-vsctl (invoked by init (pid 0)): ovs-vsctl del-port tap6d5eccfa-06"
  table Open_vSwitch row 1a4db534 (1a4db534) diff:
  table Interface row "tap6d5eccfa-06" (8d7bdf97) diff:
    delete row
  table Bridge row "br-int" (ee9ab5d4) diff:
  table Port row "tap6d5eccfa-06" (a4be79ec) diff:
    delete row

2. ovs-vswitchd log:
2023-05-19T10:48:42.400Z|13247|jsonrpc|DBG|unix:/run/openvswitch/db.sock: received notification, method="update3", params=[["monid","Open_vSwitch"],"00000000-0000-0000-0000-000000000000",{"Open_vSwitch":{"1a4db534-3133-43e0-922e-dd7d5d76a802":{"modify":{"next_cfg":135}}},"Interface":{"8d7bdf97-1f91-4ef3-9b27-b2a9e7c362e2":{"insert":{"name":"tap6d5eccfa-06"}}},"Port":{"a4be79ec-df95-495b-87e1-fc373582f647":{"insert":{"name":"tap6d5eccfa-06","interfaces":["uuid","8d7bdf97-1f91-4ef3-9b27-b2a9e7c362e2"]}}},"Bridge":{"ee9ab5d4-b72b-479b-87cd-29a19d5540a8":{"modify":{"ports":["uuid","a4be79ec-df95-495b-87e1-fc373582f647"]}}}}]
2023-05-19T10:48:42.410Z|13248|bridge|WARN|could not open network device tap6d5eccfa-06 (No such device)
2023-05-19T10:48:42.411Z|13249|jsonrpc|DBG|unix:/run/openvswitch/db.sock: send request, method="transact", params=["Open_vSwitch",{"row":{"cur_cfg":135},"where":[["_uuid","==",["uuid","1a4db534-3133-43e0-922e-dd7d5d76a802"]]],"table":"Open_vSwitch","op":"update"},{"row":{"error":"could not open network device tap6d5eccfa-06 (No such device)","ofport":-1},"where":[["_uuid","==",["uuid","8d7bdf97-1f91-4ef3-9b27-b2a9e7c362e2"]]],"table":"Interface","op":"update"},{"op":"assert","lock":"ovs_vswitchd"}], id=4787
2023-05-19T10:48:42.412Z|13250|poll_loop|DBG|wakeup due to [POLLIN] on fd 13 (<->/run/openvswitch/db.sock) at ../lib/stream-fd.c:157 (4% CPU usage)
2023-05-19T10:48:42.412Z|13251|jsonrpc|DBG|unix:/run/openvswitch/db.sock: received notification, method="update3", params=[["monid","Open_vSwitch"],"00000000-0000-0000-0000-000000000000",{"Interface":{"8d7bdf97-1f91-4ef3-9b27-b2a9e7c362e2":{"modify":{"error":"could not open network device tap6d5eccfa-06 (No such device)","ofport":-1}}},"Open_vSwitch":{"1a4db534-3133-43e0-922e-dd7d5d76a802":{"modify":{"cur_cfg":135}}}}]

2023-05-19T10:48:42.907Z|13260|vconn|DBG|u...


Revision history for this message
Dongcan Ye (hellochosen) wrote :

By the way, the same problem occurs with the ML2/OVS driver (OVS version 2.17.7).

Revision history for this message
zhaobo (zhaobo6) wrote :

Hi Dongcan,

Any idea about this one? Would it be possible for you to tell us about your OVN deployment when you hit this issue? ;-)

That would allow a comparison with the cases above, I think.

Thanks

Revision history for this message
Dongcan Ye (hellochosen) wrote :

Hi, zhaobo. We use OpenStack Zed; the OVS, OVN and Neutron components all run in containers, and ovn-sb and ovn-nb run as a Raft cluster.

We still have no solution for this problem; it seems related to OVS. I filed a bug [1] a few months ago.

[1] https://bugs.launchpad.net/neutron/+bug/2020328
