legacy router namespaces absence or error run exec

Bug #1823023 reported by YG Kumar
This bug affects 1 person
Affects            Status        Importance  Assigned to  Milestone
OpenStack-Ansible  Fix Released  Undecided   Unassigned
neutron            Incomplete    Undecided   Unassigned

Bug Description

Hi,

We have a Rocky OSA setup (version 18.1.4, deployed from git). Whenever we create a router, it is created successfully, and the command "l3-agent-list-hosting-router" shows a compute node as its host.
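A quick way to confirm where neutron believes the router is hosted is to query the L3 agents directly. A hedged sketch, showing the openstack-client equivalent of the Rocky-era command from the report; `<router-id>` is a placeholder, and the script is guarded so it is a no-op on hosts without the client:

```shell
# Rocky-era CLI, as used in the report (router ID is a placeholder):
#   neutron l3-agent-list-hosting-router <router-id>
# openstack-client equivalent: list the L3 agents hosting a given router.
if command -v openstack >/dev/null 2>&1; then
    openstack network agent list --router "<router-id>" \
        || echo "no cloud credentials or router not found"
else
    echo "openstack CLI not installed on this host"
fi
```

If the agent listed here does not match the node being inspected, the missing namespace is expected on that node.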

But when we log into the compute node and check, there is no namespace for that router. Sometimes, even when the namespace has been created, running "ip netns exec qrouter-xxxxxxxxx ip a" throws the error "Unable to find router with name or id '7ec2fa3057374a1584418124d5b879ca'".
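Worth noting: "Unable to find router with name or id" is a neutron client message, not something iproute2 prints, which suggests the failing invocation inside the namespace was a neutron CLI call rather than plain "ip a". A minimal sketch for exercising the namespace directly (the namespace name is a placeholder):

```shell
# Hypothetical namespace name; substitute the qrouter-<uuid> seen in `ip netns`.
NS="qrouter-<router-uuid>"
if ip netns list 2>/dev/null | grep -q "^${NS}"; then
    # A missing or broken namespace makes iproute2 itself complain
    # ("Cannot open network namespace"), not the neutron client.
    ip netns exec "$NS" ip a
else
    echo "namespace $NS not present on this host"
fi
```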

Also, when we run "ip netns" on the compute nodes we see this:
---------
Error: Peer netns reference is invalid.
Error: Peer netns reference is invalid.
-------------
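"Peer netns reference is invalid" typically indicates a stale entry: the bind-mounted file iproute2 keeps under /run/netns still exists, but the namespace behind it is gone. A small sketch for spotting such leftovers, assuming iproute2's default /run/netns location:

```shell
# iproute2 keeps one bind-mounted file per named namespace here.
NETNS_DIR=/run/netns
if [ -d "$NETNS_DIR" ]; then
    # Compare the raw directory entries with what `ip netns list` reports;
    # entries present on disk but erroring in the listing are stale.
    ls -1 "$NETNS_DIR"
    ip netns list 2>/dev/null || true
else
    echo "$NETNS_DIR does not exist; no named namespaces on this host"
fi
```

Neutron also ships a `neutron-netns-cleanup` utility for removing leftover agent namespaces; running it with the agent's configuration files is the usual cleanup path.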

The neutron.conf file on the computes:
----------
# Ansible managed
# General, applies to all host groups
[DEFAULT]
debug = True
# Domain to use for building hostnames
dns_domain = vbg.example.cloud
## Rpc all
executor_thread_pool_size = 64
fatal_deprecations = False
l3_ha = False
log_file = /var/log/neutron/neutron.log
rpc_response_timeout = 60
transport_url = rabbit://neutron:6a5c9d9634190b954f133f274d793be4d2@172.29.236.201:5671,neutron:6a5c9d9634190b954f133f274d793be4d2@172.29.239.27:5671,neutron:6a5c9d9634190b954f133f274d793be4d2@172.29.239.39:5671//neutron?ssl=1
# Disable stderr logging
use_stderr = False

# Agent
[agent]
polling_interval = 5
report_interval = 60
root_helper = sudo /openstack/venvs/neutron-18.1.4/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

# Concurrency (locking mechanisms)
[oslo_concurrency]
lock_path = /var/lock/neutron

# Notifications
[oslo_messaging_notifications]
driver = messagingv2
notification_topics = notifications,notifications_designate
transport_url = rabbit://neutron:6a5c9d9634190b954f133f274d793be4d2@172.29.236.201:5671,neutron:6a5c9d9634190b954f133f274d793be4d2@172.29.239.27:5671,neutron:6a5c9d9634190b954f133f274d793be4d2@172.29.239.39:5671//neutron?ssl=1

# Messaging
[oslo_messaging_rabbit]
rpc_conn_pool_size = 30
ssl = True
------------------

l3_agent.ini file
------------
# Ansible managed

# General
[DEFAULT]
debug = True

# Drivers
interface_driver = openvswitch

agent_mode = legacy

# Conventional failover
allow_automatic_l3agent_failover = True

# HA failover
ha_confs_path = /var/lib/neutron/ha_confs
ha_vrrp_advert_int = 2
ha_vrrp_auth_password = ec86ebf62a85f387569ed0251dc7c8dd9e484949ba320a6ee6bf483758a318
ha_vrrp_auth_type = PASS

# Metadata
enable_metadata_proxy = True

# L3 plugins
------------------------

Please help us with this issue.

Thanks
Y.G Kumar

LIU Yulong (dragon889)
summary: - l3 namespaces
+ legacy router namespaces absence or error run exec
Revision history for this message
YG Kumar (ygk-kmr) wrote :

Hi,

I observed something when trying to create a network namespace manually on the compute nodes: the following command hangs and never completes:
--------------
ip netns add test
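A hanging "ip netns add" is usually blocked inside the kernel on the mount() call that makes /run/netns shared or binds the new namespace file, so tracing the syscalls shows where it stops. A hedged sketch, assuming root and an installed strace, and guarded so it is a harmless no-op otherwise:

```shell
# Only attempt the trace as root with strace available; `timeout` keeps a
# reproduced hang from blocking the shell forever.
if [ "$(id -u)" -eq 0 ] && command -v strace >/dev/null 2>&1; then
    timeout 10 strace -f -e trace=mount,unshare ip netns add nsdebug || true
    ip netns delete nsdebug 2>/dev/null || true
fi
# The mount table often shows the culprit: look for odd /run/netns entries.
grep netns /proc/self/mountinfo || echo "no netns mounts visible"
```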

Revision history for this message
Bernard Cafarelli (bcafarel) wrote :

Simple "ip netns" commands failing suggests this is out of neutron's scope; adding OSA directly and marking this incomplete on our side for now.

Changed in openstack-ansible:
status: New → Incomplete
status: Incomplete → New
Changed in neutron:
status: New → Incomplete
Revision history for this message
Dmitriy Rabotyagov (noonedeadpunk) wrote :

It's odd that routers were scheduled to compute nodes; we usually suggest using either standalone network nodes or controllers as the destination for L3 agents.

Also, we haven't really seen issues like that for a while now. Since Rocky is unsupported and has reached its End of Life, there is no way to help you with this issue today.
But I suppose it was fixed, as things are quite stable now.

Please feel free to bump this bug report or open a new one if similar issues arise in the future.

Changed in openstack-ansible:
status: New → Fix Released