HA router scheduled to DVR agent on compute node
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
neutron | Expired | Medium | Unassigned |
Bug Description
I used my company's environment to test the Neutron DVR router.
At first, the environment used 2 network nodes to provide L3 HA routers, with the following in neutron.conf:
l3_ha = True
max_l3_
min_l3_
Then I changed neutron.conf to:
l3_ha = False
router_distributed = True
max_l3_
min_l3_
and ran the DVR-mode l3-agent on the compute nodes. Now the strange thing happened: all HA routers were
bound to this compute node.
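For reference, the two setups described above look roughly like this (a sketch with illustrative values; the original report truncates the max_l3_/min_l3_ option names, and the per-node agent mode lives in l3_agent.ini rather than neutron.conf):

```ini
# neutron.conf on the controller -- HA setup (before the change)
[DEFAULT]
l3_ha = True
router_distributed = False

# neutron.conf on the controller -- DVR setup (after the change)
[DEFAULT]
l3_ha = False
router_distributed = True

# l3_agent.ini on a compute node running the DVR-mode agent
[DEFAULT]
agent_mode = dvr
```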
If I create a new HA router and use "neutron l3-agent-
root@controller:~# neutron l3-agent-
+------
| id | host | admin_state_up | alive | ha_state |
+------
| 0f3f65bd-
| b174f741-
+------
root@controller:~# neutron l3-agent-
+------
| id | host | admin_state_up | alive | ha_state |
+------
| 95f0c274-
| 0f3f65bd-
| b174f741-
+------
It will first bind to network1 and network2, then bind to compute3.
I guess the reason is that when the DVR-mode l3-agent starts syncing routers, Neutron binds the HA router to compute3.
tags: | added: kilo-backport-potential l3-dvr-backlog l3-ha liberty-backport-potential |
Are you sure you configured DVR correctly? The current code [1] has a check that should prevent your problem. What is the agent_mode in your compute node's l3_agent configuration?
[1] https://github.com/openstack/neutron/blob/bcd383f38cded0ef87ec8f042031814ce362a5f0/neutron/db/l3_agentschedulers_db.py#L523
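The kind of check the reply refers to can be sketched as follows. This is a simplified illustration, not the actual Neutron scheduler code: the function name, dict shapes, and field names here are assumptions, but the idea matches the report, namely that an agent running in plain 'dvr' mode on a compute node must be excluded from the candidates for hosting an HA router.

```python
# Sketch of an L3 scheduler candidate filter (illustrative, not Neutron's
# real implementation): skip agents whose agent_mode is 'dvr' when
# scheduling an HA router, so compute-node agents never host HA instances.

def get_ha_router_candidates(router, agents):
    """Return the agents eligible to host the given router.

    `router` is a dict with an `ha` flag; each agent is a dict whose
    `configurations` includes `agent_mode` ('legacy', 'dvr', or 'dvr_snat').
    """
    candidates = []
    for agent in agents:
        mode = agent["configurations"].get("agent_mode", "legacy")
        # A plain 'dvr' agent on a compute node only serves distributed
        # router ports for local VMs; it must not be bound an HA router.
        if router.get("ha") and mode == "dvr":
            continue
        candidates.append(agent)
    return candidates


agents = [
    {"host": "network1", "configurations": {"agent_mode": "legacy"}},
    {"host": "network2", "configurations": {"agent_mode": "legacy"}},
    {"host": "compute3", "configurations": {"agent_mode": "dvr"}},
]
eligible = get_ha_router_candidates({"ha": True}, agents)
print([a["host"] for a in eligible])  # → ['network1', 'network2']
```

Without such a filter, a freshly started DVR-mode agent reporting in from a compute node would look like any other L3 agent to the scheduler, which matches the binding behavior the reporter observed.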