Activity log for bug #1486882

Date Who What changed Old value New value Message
2015-08-20 08:02:52 Dong Liu bug added bug
2015-08-20 08:09:01 Dong Liu neutron: assignee Dong Liu (liudong78)
2015-08-21 03:32:42 Kevin Benton neutron: status New Confirmed
2015-08-25 03:26:28 Dong Liu description
Old value: In some Neutron deployments there is more than one ML2 backend, for example Open vSwitch plus an SDN controller. Ports in the same network may be handled by different ML2 backends. For the Open vSwitch backend, the tunneling IP of a port is the same as the ovs-agent host's IP, but for an SDN controller backend there is no L2 agent, so the tunneling IP of a port cannot be obtained from the host configuration. I therefore think we need a new ML2 port attribute to record a port's tunnel connection info; it could be named "binding:tunnel_connection". In the current implementation we obtain the tunneling IP of a port as: port -> binding:host -> agent -> agent configurations -> tunneling_ip. With this extension we could obtain it more simply: port -> binding:tunnel_connection -> tunneling_ip. Proposed change:
* add a port extension attribute named "binding:tunnel_connection"
* add a field to the ports table
* a small change to the create/update port flow, probably in the openvswitch ML2 driver and the l2pop driver
* possibly some new policy rules
New value: In some Neutron deployments there is more than one ML2 backend, for example Open vSwitch plus a mechanism driver that manages many ToR switches. Ports in the same network may be handled by different ML2 backends. For the Open vSwitch backend, the tunneling IP of a port is the same as the ovs-agent host's IP, but for the other kind of backend there is no L2 agent, so the tunneling IP of a port cannot be obtained from the host configuration. I therefore think we need a new ML2 port attribute to record this tunnel connection info; it could be named "binding:tunnel_connection". Another benefit of this extension: in the current implementation, for Open vSwitch backends, we obtain the tunneling IP of a port as: port -> binding:host -> agent -> agent configurations -> tunneling_ip. With this extension we could obtain it more simply: port -> binding:tunnel_connection -> tunneling_ip. Proposed change:
* in the API, add a port extension attribute named "binding:tunnel_connection"
* in the DB, add a field to the ports table
* a small change to the create/update port flow, probably in the openvswitch ML2 driver and the l2pop driver
* possibly some new policy rules
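A minimal sketch of how such a port extension attribute could be declared, following Neutron's usual extended-attribute-map pattern; only the attribute name comes from the report, and the validator, default, and policy flag are illustrative assumptions, not an actual patch.

    # Hypothetical sketch of the proposed "binding:tunnel_connection"
    # port attribute, in the style of Neutron API extension maps.
    # Only the attribute name is from the bug report; everything else
    # here is an assumption for illustration.
    TUNNEL_CONNECTION = 'binding:tunnel_connection'

    EXTENDED_ATTRIBUTES_2_0 = {
        'ports': {
            TUNNEL_CONNECTION: {
                'allow_post': True,   # settable on port create
                'allow_put': True,    # settable on port update
                'default': None,      # ports with no overlay termination keep None
                'validate': {'type:string_or_none': None},
                'enforce_policy': True,  # admin-only, like other binding:* attributes
                'is_visible': True,
            }
        }
    }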
2015-10-08 10:20:39 Tapio Tallgren bug added subscriber Tapio Tallgren
2015-11-11 21:27:18 Armando Migliaccio neutron: importance Undecided Wishlist
2015-11-30 07:36:22 shihanzhang description
Old value: the description as set on 2015-08-25 (above).
New value: the same description with the "Proposed change" list removed.
2015-12-15 01:21:48 Armando Migliaccio neutron: status Confirmed Won't Fix
2015-12-15 06:58:32 shihanzhang summary "ml2 port cross backends extension" "Tunnel across different backends"
2015-12-15 06:59:49 shihanzhang description
Old value: the description as set on 2015-11-30 (above).
New value: For agent-based tunnel backends such as OVS or Linux bridge, forwarding VM traffic on VXLAN networks requires deriving a lot of information (the IP address of the agent hosting the VM and the VXLAN port that agent listens on). Today the l2pop driver obtains this via port -> binding:host -> agent -> agent configurations -> tunneling_ip, which forces every agent to set the same VXLAN port in its config file; a host cannot set its own endpoint attributes such as the listening UDP port. Another problem is making multiple tunnel backends work together. Suppose a hybrid networking infrastructure in which some VTEPs are based on Open vSwitch, some on ToR switches from vendor A, and some on ToR switches from vendor B. Vendor A's mechanism driver knows, by its own means, the VTEP information of all virtual ports connected to A's ToR switches, but it has no way to learn the VTEP information of ports connected to B's switches, and vice versa. Likewise, the l2pop mechanism driver for OVS knows nothing about the VTEPs of ports behind the ToR switches of either A or B, so the tunnel mesh is broken into three isolated islands. Based on these use cases, I think we need a shared, standard data model for storing VTEP information across backends: each backend saves its VTEPs into this store and fetches the VTEP information of the other backends, making cross-backend population possible. For agent-based tunnel backends we can get the termination info from the compute host, but for agentless backends, where do we store it? It therefore seems reasonable to extend the port with a property that stores the termination info; for ports that have no overlay termination info, the new property can be set to None.
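One way the shared VTEP store could look, sketched as a SQLAlchemy model keyed by port; the table name, columns, and the per-endpoint udp_port field are illustrative assumptions, not part of the report or any actual patch.

    # Illustrative only: a backend-agnostic per-port VTEP record that both
    # agent-based and agentless mechanism drivers could write and read.
    # All names here are hypothetical.
    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class PortTunnelConnection(Base):
        """Overlay termination (VTEP) info for one port."""
        __tablename__ = 'ml2_port_tunnel_connections'

        port_id = sa.Column(sa.String(36),
                            sa.ForeignKey('ports.id', ondelete='CASCADE'),
                            primary_key=True)
        # VTEP IP; None for ports with no overlay termination.
        tunnel_ip = sa.Column(sa.String(64), nullable=True)
        # Per-endpoint VXLAN UDP port, so hosts need not all agree on one value.
        udp_port = sa.Column(sa.Integer, nullable=True)
        # Free-form backend tag, e.g. 'ovs', 'vendor-a-tor'.
        backend = sa.Column(sa.String(64), nullable=True)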
2015-12-15 09:07:26 shihanzhang description
Old value: the description as set at 06:59:49 (above).
New value: the same description with the final sentences (extending the port with a property for the termination info, defaulting to None) removed.
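A hypothetical sketch of how an l2pop-style driver could build the remote VTEP set for a network from the shared store sketched above, instead of walking port -> binding:host -> agent configurations; it assumes a Port model with id and network_id columns, and every name is illustrative.

    # Hypothetical: cross-backend population from the shared store, so OVS
    # agents and the ToR drivers of vendors A and B all see one another's
    # VTEPs and the tunnel mesh is no longer split into islands.
    def remote_vteps(session, network_id, local_tunnel_ip):
        """Return (tunnel_ip, udp_port) for every other VTEP in the network."""
        rows = (session.query(PortTunnelConnection)
                .join(Port, Port.id == PortTunnelConnection.port_id)
                .filter(Port.network_id == network_id,
                        PortTunnelConnection.tunnel_ip.isnot(None))
                .all())
        # Skip our own endpoint; duplicates collapse via the set.
        return {(r.tunnel_ip, r.udp_port)
                for r in rows if r.tunnel_ip != local_tunnel_ip}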
2016-01-13 03:27:25 Armando Migliaccio neutron: assignee Dong Liu (liudong78)