Activity log for bug #1458890

Date | Who | What changed | Old value | New value | Message
2015-05-26 13:55:41 | Kyle Mestery | bug | | | added bug
2015-05-26 13:55:54 | Kyle Mestery | tags | | rfe |
2015-05-26 14:05:15 | Nobuto Murata | bug | | | added subscriber Nobuto Murata
2015-05-26 14:06:32 | Tim Bell | bug | | | added subscriber Tim Bell
2015-05-26 14:06:37 | Mike Dorman | bug | | | added subscriber Mike Dorman
2015-05-27 00:25:07 | Eugene Nikanorov | neutron: status | New | Opinion |
2015-05-27 01:24:46 | Kyle Mestery | neutron: status | Opinion | Confirmed |
2015-05-27 07:44:38 | Shintaro Mizuno | bug | | | added subscriber Shintaro Mizuno
2015-05-27 08:03:36 | Masanori Itoh | bug | | | added subscriber Masanori Itoh
2015-05-27 08:42:46 | Toshikazu Ichikawa | bug | | | added subscriber Toshikazu Ichikawa
2015-06-02 13:18:01 | Neil Jerram | bug | | | added subscriber Neil Jerram
2015-06-04 15:58:37 | vishwanath jayaraman | bug | | | added subscriber vishwanath jayaraman
2015-06-04 22:56:38 | Cedric Brandily | bug | | | added subscriber Cedric Brandily
2015-06-08 03:57:52 | Itsuro Oda | bug | | | added subscriber Itsuro Oda
2015-06-09 09:07:42 | Ed Harrison | bug | | | added subscriber Ed Harrison
2015-06-17 04:34:24 | Sam Morrison | bug | | | added subscriber Sam Morrison
2015-06-17 16:00:27 | Kevin Fox | bug | | | added subscriber Kevin Fox
2015-06-17 16:01:26 | Kyle Mestery | neutron: status | Confirmed | Triaged |
2015-06-17 18:07:02 | Andy Hill | bug | | | added subscriber Andy Hill
2015-06-18 06:36:05 | gustavo panizzo | bug | | | added subscriber gustavo panizzo
2015-06-18 16:20:13 | Belmiro Moreira | bug | | | added subscriber Belmiro Moreira
2015-06-22 16:12:04 | Adam Huffman | bug | | | added subscriber Adam Huffman
2015-07-21 20:47:06 | Carl Baldwin | description | (see below) | (see below) |

Old value:

This is feedback from the Vancouver OpenStack Summit. During the large deployment team meeting (Go Daddy, Yahoo!, NeCTAR, CERN, Rackspace, HP, BlueBox, among others), there was a discussion of the network architectures we use to deliver OpenStack. As we talked, it became clear that there are a number of challenges around networking. In many cases, our data center networks are architected with a differentiation between layer 2 and layer 3. Said another way, there are distinct network "segments" which are only available to a subset of compute hosts. These topologies are typically necessary to manage network resource capacity (IP addresses, broadcast domain size, ARP tables, etc.).

Network topologies like these are not possible to describe with Neutron constructs today. The traditional solution is tunneling and overlay networks, which make all networks available everywhere in the data center. However, overlay networks represent a large increase in complexity that can be very difficult to troubleshoot. For this reason, many large deployers are not using overlay networks at all (or only for specific use cases like private tenant networks). Because Neutron does not have constructs that accurately describe our network architectures, we'd like to see the notion of a network "segment" in Neutron. A "segment" could mean an L2 domain, an IP block boundary, or some other partition. Operators could use this new construct to build accurate models of network topology within Neutron, making it much more usable.

Example: The typical use case is L2 segments that are confined to a single rack (or some subset of compute hosts) but are still part of a larger L3 network. In this case, the overall Neutron network would describe the L3 network, and the network segments would be used to describe the L2 segments.

With the network segment construct (which is not intended to be exposed to end users), there is also a need for some scheduling logic around placement and addressing of instances on an appropriate network segment, based on availability and capacity. This also implies a means, via the API, to report the IP capacity of networks and segments, so we can filter out segments without capacity and the compute nodes that are tied to those segments.

Example: The end user chooses the Neutron network for their instance, which is actually composed of several lower-level network segments within Neutron. Scheduling must be done such that the network segment chosen for the instance is available to the compute node on which the instance is placed. Additionally, the chosen network segment must have available IP capacity in order for the instance to be placed there. Also, the scheduling for resize, migrate, ... should only consider the compute nodes allowed in the "network segment" where the VM is placed.

New value: identical to the old value, with https://etherpad.openstack.org/p/Network_Segmentation_Usecases appended. (A sketch of the scheduling logic described here follows the log.)
2015-10-20 15:57:32 | Armando Migliaccio | tags | rfe | rfe-approved |
2015-10-28 00:47:53 | Mathieu Gagné | bug | | | added subscriber Mathieu Gagné
2015-11-11 21:21:50 | Armando Migliaccio | neutron: importance | Undecided | Wishlist |
2015-11-16 06:30:58 | Steve Ruan | bug | | | added subscriber steve
2015-11-16 21:20:09 | lukas | bug | | | added subscriber lukas
2015-11-20 01:50:49 | Armando Migliaccio | neutron: milestone | | mitaka-1 |
2015-11-20 02:42:40 | Armando Migliaccio | neutron: assignee | | Carl Baldwin (carl-baldwin) |
2015-12-03 19:20:17 | Armando Migliaccio | neutron: milestone | mitaka-1 | mitaka-2 |
2016-01-18 23:36:32 | Carl Baldwin | neutron: milestone | mitaka-2 | mitaka-3 |
2016-01-18 23:38:09 | Cedric Brandily | neutron: milestone | mitaka-3 | |
2016-03-24 19:29:49 | Henry Gessau | summary | Add segment support to Neutron | [RFE] Add segment support to Neutron |
2016-05-02 18:13:59 | Han Zhou | bug | | | added subscriber Han Zhou
2020-05-29 12:48:22 | Radosław Piliszek | neutron: status | Triaged | Fix Released |
2020-05-29 12:48:30 | Radosław Piliszek | bug | | | added subscriber Radosław Piliszek
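
The description in the 2015-07-21 entry asks for scheduling that filters out segments without free IP capacity, and with them the compute nodes tied only to those segments. Below is a minimal illustrative sketch of that filter, assuming a toy data model; it is not Neutron's actual implementation, and the Segment class, candidate_hosts function, and all host and segment names are hypothetical.

from dataclasses import dataclass

@dataclass
class Segment:
    """One L2 segment of a routed network (hypothetical model, not a Neutron class)."""
    name: str
    hosts: set        # compute hosts attached to this segment
    total_ips: int    # usable addresses in the segment's subnet(s)
    used_ips: int = 0

    @property
    def free_ips(self):
        return self.total_ips - self.used_ips

def candidate_hosts(segments):
    """Hosts eligible for a new instance: those attached to at least one
    segment that still has free IP addresses."""
    eligible = set()
    for seg in segments:
        if seg.free_ips > 0:
            eligible |= seg.hosts
    return eligible

# Two rack-local L2 segments backing one logical Neutron network.
rack1 = Segment("rack1-vlan101", {"cn-01", "cn-02"}, total_ips=4, used_ips=4)
rack2 = Segment("rack2-vlan102", {"cn-03", "cn-04"}, total_ips=4, used_ips=1)

# rack1 has no free addresses, so cn-01 and cn-02 are filtered out.
print(sorted(candidate_hosts([rack1, rack2])))   # ['cn-03', 'cn-04']

Per the description, the same restriction would apply to resize and migration: the target host set is limited to hosts on the segment where the instance is already placed.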