After rechecking many logs, I think this is because the dvr and dvr_snat nodes are both compute nodes: https://github.com/openstack-infra/devstack-gate/blob/master/devstack-vm-gate.sh#L843
In the `Bug Description` log, the test VM was scheduled to the dvr_snat node, aka subnode-2: http://logs.openstack.org/63/555263/3/check/neutron-tempest-plugin-dvr-multinode-scenario/e7c012f/logs/subnode-2/screen-n-cpu.txt.gz#_Mar_23_09_14_47_604613
That's why the tc rules were applied in the snat namespace: http://logs.openstack.org/63/555263/3/check/neutron-tempest-plugin-dvr-multinode-scenario/e7c012f/logs/subnode-2/screen-q-l3.txt.gz#_Mar_23_09_20_07_568567
A DVR edge (or DVR edge HA) router will set the rules in the snat namespace: https://github.com/openstack/neutron/blob/master/neutron/agent/l3/extensions/fip_qos.py#L223
I think that, for DVR FIP QoS in such mixed deployment scenarios, we can install the rules again in the qrouter namespace on the dvr_snat node.
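A minimal sketch of the idea (the function and namespace-selection logic here are illustrative assumptions, not the actual Neutron fip_qos extension API): on a dvr_snat node that is also a compute node, the FIP QoS tc rules would need to land in the qrouter namespace as well, not only in the snat namespace.

```python
# Hypothetical sketch -- names are illustrative, not real Neutron internals.
# Decide which router namespaces should receive FIP QoS tc rules,
# depending on the L3 agent mode.

def namespaces_for_fip_qos(agent_mode, router_id):
    """Return the namespaces where FIP QoS tc rules should be installed."""
    qrouter_ns = "qrouter-%s" % router_id
    snat_ns = "snat-%s" % router_id
    if agent_mode == "dvr_snat":
        # Mixed deployment: the node hosts the snat namespace but may
        # also host VMs, so the rules must be installed in the qrouter
        # namespace too, otherwise floating IP traffic from local VMs
        # bypasses the tc rules.
        return [snat_ns, qrouter_ns]
    if agent_mode == "dvr":
        # Pure compute node: floating IP traffic goes through the
        # qrouter namespace only.
        return [qrouter_ns]
    # Legacy/centralized mode: single router namespace.
    return [qrouter_ns]

print(namespaces_for_fip_qos("dvr_snat", "abc123"))
```

This is just the selection logic; the actual fix would hook into the fip_qos agent extension so that rule installation iterates over all applicable namespaces.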