2017-03-07 10:09:31 |
wangyalei |
bug |
|
|
added bug |
2017-03-07 10:09:37 |
wangyalei |
nova: assignee |
|
wangyalei (yalei) |
|
2017-03-07 10:10:16 |
wangyalei |
summary |
nova-compute will try to re-plug the vif even if it exists |
nova-compute will try to re-plug the vif even if it exists for vhostuser port. |
|
2017-03-07 10:20:26 |
wangyalei |
description |
Description
===========
In a Mitaka deployment of Neutron with ovs-dpdk:
if we stop the ovs agent and then restart nova-compute, the VMs on that host lose network connectivity.
Steps to reproduce
==================
Deploy Mitaka with Neutron and ovs-dpdk enabled, and choose a compute node where a VM has network connectivity.
Run the following on the host:
1. #systemctl stop neutron-openvswitch-agent.service
2. #systemctl restart openstack-nova-compute.service
then ping $VM_IN_THIS_HOST
Expected result
===============
ping $VM_IN_THIS_HOST would succeed.
Actual result
=============
ping $VM_IN_THIS_HOST failed.
Environment
===========
CentOS 7, OVS 2.5.1, DPDK 2.2.0
Reason:
After some digging, I found that nova-compute tries to plug the vif every time it boots.
Specifically for vhostuser ports, nova-compute does not check whether the port already exists (as it does for legacy OVS), and re-plugs the port with vsctl args like "--if-exists del-port vhuxxxx".
(refer to https://github.com/openstack/nova/blob/stable/mitaka/nova/virt/libvirt/vif.py#L679-L683)
After the OVS vhostuser port is recreated, it does not get the correct vlan tag that the ovs agent had set.
In the test environment, restarting the ovs agent caused the agent to set a proper vlan id for the port, and network connectivity was restored.
Not sure whether this is a bug or a config issue; am I missing something? |
Description
===========
In a Mitaka deployment of Neutron with ovs-dpdk:
if we stop the ovs agent and then restart nova-compute, the VMs on that host lose network connectivity.
Steps to reproduce
==================
Deploy Mitaka with Neutron and ovs-dpdk enabled, and choose a compute node where a VM has network connectivity.
Run the following on the host:
1. #systemctl stop neutron-openvswitch-agent.service
2. #systemctl restart openstack-nova-compute.service
then ping $VM_IN_THIS_HOST
Expected result
===============
ping $VM_IN_THIS_HOST would succeed.
Actual result
=============
ping $VM_IN_THIS_HOST failed.
Environment
===========
CentOS 7
OVS 2.5.1
DPDK 2.2.0
openstack-nova-compute-13.1.1-1
Reason:
After some digging, I found that nova-compute tries to plug the vif every time it boots.
Specifically for vhostuser ports, nova-compute does not check whether the port already exists (as it does for legacy OVS), and re-plugs the port with vsctl args like "--if-exists del-port vhuxxxx".
(refer to https://github.com/openstack/nova/blob/stable/mitaka/nova/virt/libvirt/vif.py#L679-L683)
After the OVS vhostuser port is recreated, it does not get the correct vlan tag that the ovs agent had set.
In the test environment, restarting the ovs agent caused the agent to set a proper vlan id for the port, and network connectivity was restored.
Not sure whether this is a bug or a config issue; am I missing something? |
|
2017-03-07 10:32:44 |
wangyalei |
description |
Description
===========
In a Mitaka deployment of Neutron with ovs-dpdk:
if we stop the ovs agent and then restart nova-compute, the VMs on that host lose network connectivity.
Steps to reproduce
==================
Deploy Mitaka with Neutron and ovs-dpdk enabled, and choose a compute node where a VM has network connectivity.
Run the following on the host:
1. #systemctl stop neutron-openvswitch-agent.service
2. #systemctl restart openstack-nova-compute.service
then ping $VM_IN_THIS_HOST
Expected result
===============
ping $VM_IN_THIS_HOST would succeed.
Actual result
=============
ping $VM_IN_THIS_HOST failed.
Environment
===========
CentOS 7
OVS 2.5.1
DPDK 2.2.0
openstack-nova-compute-13.1.1-1
Reason:
After some digging, I found that nova-compute tries to plug the vif every time it boots.
Specifically for vhostuser ports, nova-compute does not check whether the port already exists (as it does for legacy OVS), and re-plugs the port with vsctl args like "--if-exists del-port vhuxxxx".
(refer to https://github.com/openstack/nova/blob/stable/mitaka/nova/virt/libvirt/vif.py#L679-L683)
After the OVS vhostuser port is recreated, it does not get the correct vlan tag that the ovs agent had set.
In the test environment, restarting the ovs agent caused the agent to set a proper vlan id for the port, and network connectivity was restored.
Not sure whether this is a bug or a config issue; am I missing something? |
Description
===========
In a Mitaka deployment of Neutron with ovs-dpdk:
if we stop the ovs agent and then restart nova-compute, the VMs on that host lose network connectivity.
Steps to reproduce
==================
Deploy Mitaka with Neutron and ovs-dpdk enabled, and choose a compute node where a VM has network connectivity.
Run the following on the host:
1. #systemctl stop neutron-openvswitch-agent.service
2. #systemctl restart openstack-nova-compute.service
then ping $VM_IN_THIS_HOST
Expected result
===============
ping $VM_IN_THIS_HOST would succeed.
Actual result
=============
ping $VM_IN_THIS_HOST failed.
Environment
===========
CentOS 7
OVS 2.5.1
DPDK 2.2.0
openstack-nova-compute-13.1.1-1
Reason:
After some digging, I found that nova-compute tries to plug the vif every time it boots.
Specifically for vhostuser ports, nova-compute does not check whether the port already exists (as it does for legacy OVS), and re-plugs the port with vsctl args like "--if-exists del-port vhuxxxx".
(refer to https://github.com/openstack/nova/blob/stable/mitaka/nova/virt/libvirt/vif.py#L679-L683)
After the OVS vhostuser port is recreated, it does not get the correct vlan tag that the ovs agent had set.
In the test environment, restarting the ovs agent caused the agent to set a proper vlan id for the port, and network connectivity was restored.
Not sure whether this is a bug or a config issue; am I missing something?
There is also an fp_plug type for vhostuser ports; how can we specify it? |
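The missing existence check described in the report can be sketched as follows. This is an illustrative reconstruction only, not nova's or os-vif's actual code: the function names `port_exists` and `plug_vhostuser`, the injectable `run` parameter, and the bridge/port names are assumptions; the `ovs-vsctl port-to-br` and `--may-exist add-port` commands are real ovs-vsctl subcommands.

```python
import subprocess


def port_exists(bridge, port, run=subprocess.run):
    """Return True if `port` is already attached to `bridge`.

    `ovs-vsctl port-to-br <port>` prints the bridge name and exits 0
    when the port exists, and exits non-zero when it does not.
    """
    result = run(
        ["ovs-vsctl", "port-to-br", port],
        capture_output=True, text=True,
    )
    return result.returncode == 0 and result.stdout.strip() == bridge


def plug_vhostuser(bridge, port, run=subprocess.run):
    """Plug a vhostuser port only if it is not already on the bridge.

    Unconditionally deleting and re-adding the port (the behaviour the
    report describes) loses the vlan tag the ovs agent had set on it;
    skipping the re-plug when the port exists preserves that tag.
    """
    if port_exists(bridge, port, run=run):
        return  # already plugged; keep the agent-assigned vlan tag
    run(["ovs-vsctl", "--may-exist", "add-port", bridge, port,
         "--", "set", "Interface", port, "type=dpdkvhostuser"])
```

With this check in place, restarting nova-compute while the ovs agent is down would leave an existing vhostuser port (and its vlan tag) untouched instead of recreating it untagged.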
|
2017-03-21 16:06:18 |
Maciej Szankin |
nova: status |
New |
In Progress |
|
2017-06-27 15:56:45 |
Sean Dague |
tags |
|
openstack-version.mitaka |
|
2017-06-27 19:30:52 |
Sean Dague |
nova: status |
In Progress |
New |
|
2017-06-27 19:30:55 |
Sean Dague |
nova: assignee |
wangyalei (yalei) |
|
|
2017-07-28 12:27:14 |
Sean Dague |
nova: status |
New |
Incomplete |
|
2017-07-28 12:28:54 |
Sean Dague |
nova: status |
Incomplete |
Opinion |
|
2018-03-05 14:25:00 |
Stephen Finucane |
nova: status |
Opinion |
Confirmed |
|
2018-03-05 14:25:06 |
Stephen Finucane |
nova: importance |
Undecided |
High |
|
2018-03-21 14:58:14 |
Matt Riedemann |
bug task added |
|
os-vif |
|
2018-03-21 14:58:31 |
Matt Riedemann |
bug task deleted |
nova |
|
|
2018-03-21 14:58:37 |
Matt Riedemann |
os-vif: status |
New |
Fix Committed |
|
2018-03-21 14:59:00 |
Matt Riedemann |
os-vif: importance |
Undecided |
High |
|
2018-03-21 14:59:05 |
Matt Riedemann |
os-vif: assignee |
|
sahid (sahid-ferdjaoui) |
|
2018-03-21 15:02:21 |
Matt Riedemann |
nominated for series |
|
os-vif/pike |
|
2018-03-21 15:02:21 |
Matt Riedemann |
bug task added |
|
os-vif/pike |
|
2018-03-21 15:02:21 |
Matt Riedemann |
nominated for series |
|
os-vif/queens |
|
2018-03-21 15:02:21 |
Matt Riedemann |
bug task added |
|
os-vif/queens |
|
2018-03-21 15:02:27 |
Matt Riedemann |
os-vif/pike: status |
New |
In Progress |
|
2018-03-21 15:02:28 |
Matt Riedemann |
os-vif/queens: status |
New |
In Progress |
|
2018-03-21 15:02:30 |
Matt Riedemann |
os-vif/pike: importance |
Undecided |
High |
|
2018-03-21 15:02:32 |
Matt Riedemann |
os-vif/queens: importance |
Undecided |
High |
|
2018-03-21 15:02:36 |
Matt Riedemann |
os-vif/pike: assignee |
|
sahid (sahid-ferdjaoui) |
|
2018-03-21 15:02:42 |
Matt Riedemann |
os-vif/queens: assignee |
|
sahid (sahid-ferdjaoui) |
|
2018-03-21 15:22:52 |
OpenStack Infra |
os-vif/queens: status |
In Progress |
Fix Committed |
|
2018-03-21 19:44:32 |
OpenStack Infra |
os-vif/pike: assignee |
sahid (sahid-ferdjaoui) |
Matt Riedemann (mriedem) |
|
2018-04-03 09:10:54 |
OpenStack Infra |
os-vif/pike: status |
In Progress |
Fix Committed |
|