Network admin_state_up is not working

Bug #1237807 reported by ofer blaut
This bug affects 3 people
Affects: neutron · Status: Won't Fix · Importance: Medium · Assigned to: Sylvain Afchain

Bug Description

By default a network is created with admin_state_up=True.

The network is not deactivated when the user updates admin_state_up to False: traffic from VMs still flows and the network STATUS is still ACTIVE.

[root@puma04 ~(keystone_admin_tenant1)]$neutron net-list
+--------------------------------------+-------------+----------------------------------------------------+
| id | name | subnets |
+--------------------------------------+-------------+----------------------------------------------------+
| 62e4a6ac-970e-49fe-81c1-1115dee7f800 | ext_net | 6dd43d99-e226-4e85-9471-89091c3ad031 |
| 78c1f62e-22c0-42e0-9377-e9d247862da4 | net-vlan201 | 534d328d-441b-4fe6-b4ba-3c397391bea2 89.66.66.0/24 |
+--------------------------------------+-------------+----------------------------------------------------+

[root@puma04 ~(keystone_admin_tenant1)]$neutron net-update 78c1f62e-22c0-42e0-9377-e9d247862da4 --admin_state_up False <<<<<<<<<<<
Updated network: 78c1f62e-22c0-42e0-9377-e9d247862da4
[root@puma04 ~(keystone_admin_tenant1)]$neutron net-show 78c1f62e-22c0-42e0-9377-e9d247862da4
+-----------------+--------------------------------------+
| Field | Value |
+-----------------+--------------------------------------+
| admin_state_up | False | <<<<<<
| id | 78c1f62e-22c0-42e0-9377-e9d247862da4 |
| name | net-vlan201 |
| router:external | False |
| shared | False |
| status | ACTIVE | <<<<<<<<<< Still ACTIVE
| subnets | 534d328d-441b-4fe6-b4ba-3c397391bea2 |
| tenant_id | c7431cff351542c28427028d9befe1fb |
+-----------------+--------------------------------------+
[root@puma04 ~(keystone_admin_tenant1)]$neutron net-update 78c1f62e-22c0-42e0-9377-e9d247862da4 --admin_state_up True <<<<<<<<<<
Updated network: 78c1f62e-22c0-42e0-9377-e9d247862da4
[root@puma04 ~(keystone_admin_tenant1)]$neutron net-show 78c1f62e-22c0-42e0-9377-e9d247862da4
+-----------------+--------------------------------------+
| Field | Value |
+-----------------+--------------------------------------+
| admin_state_up | True |
| id | 78c1f62e-22c0-42e0-9377-e9d247862da4 |
| name | net-vlan201 |
| router:external | False |
| shared | False |
| status | ACTIVE | <<<<<<<<<<<<
| subnets | 534d328d-441b-4fe6-b4ba-3c397391bea2 |
| tenant_id | c7431cff351542c28427028d9befe1fb |
+-----------------+--------------------------------------+

Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

Hi Ofer,

did this happen with OVS or ML2 plugins?
I would like to tag the bug appropriately.

Revision history for this message
Sylvain Afchain (sylvain-afchain) wrote :

Hi Ofer, Salvatore,

I have tested this with the ML2 plugin, and the bug occurs.

Revision history for this message
Sylvain Afchain (sylvain-afchain) wrote :

Hi Ofer,

the status column returned by the net-show command is not the admin_state_up column, and your command to set admin_state_up to False works correctly; I confirmed it by checking the database.

Now, I don't know what should be done in the case where a network has admin_state_up set to False.

Should all traffic on that network be stopped?

Revision history for this message
ofer blaut (oblaut) wrote :

Hi Sylvain
IMHO when admin_state_up is False that entity should be down, meaning the network should be down.
Otherwise, what is the use of admin_state_up? The same is true for port admin_state_up.

Thanks

Ofer

tags: added: ml2
Changed in neutron:
importance: Undecided → High
status: New → Triaged
Changed in neutron:
assignee: nobody → Sylvain Afchain (sylvain-afchain)
Changed in neutron:
status: Triaged → In Progress
Revision history for this message
Nir Yechiel (nyechiel) wrote :

IMHO admin_state_up = False should bring the network down, and the traffic indeed needs to be stopped. This is of course a risky operation, so there should be a clear warning describing the action and asking the user to confirm it.

With regards to the implementation, I think there is a difference between the network admin_state and the individual ports' admin_state: setting the network admin_state to False should not change the ports' state to False. Instead, I am in favor of the second solution [1] described by Sylvain on the ML.

/Nir

[1] do not change the admin_state_up value of ports; instead, introduce a new field in the get_device_details RPC call to indicate that the network's admin_state_up is down, and then set the port as dead

Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

The administrative status of a network is a tricky attribute, as it implicitly involves changes in the state of other entities that belong to that network. In our case that's ports and DHCP services, but there are open questions on whether this should also affect, for instance, the status of load balancers serving on that network.

To make things even more complicated, what would happen when this operation is performed on an external network? Should the ext gw and all the floating IPs be shut down as well?

I know at least of a plugin for which it was decided to declare this attribute as unsupported.

However it is implemented, it is an operation which is not atomic (it orchestrates several resources) and very likely to have side effects (changes in the state of resources other than the one the API request was sent for).
Ensuring the implementation is semantically consistent across plugins would not be easy either.

Moreover, a strategy where the API emits a warning as a response and then asks for confirmation, to ensure the user has understood the risk of the operation, looks beyond the scope of the API itself, and should perhaps be implemented in clients such as Horizon.

On the other hand, I'd dare to suggest that we should just deprecate this attribute and remove it from the API once the next version is released — unless we have a strong and compelling use case for putting a network administratively down (in my opinion, putting all the ports down with a single call is not a compelling use case).

tags: added: api neutron-core
removed: ml2
Revision history for this message
Sylvain Afchain (sylvain-afchain) wrote :

I think the idea is to avoid traffic to or from any VMs attached to the network. So a first fix could be to mark the VM ports as dead when the network is set down, per the suggestion I made on the mailing list [1].

[1] do not change the admin_state_up value of ports; instead, introduce a new field in the get_device_details RPC call to indicate that the network's admin_state_up is down, and then set the port as dead.
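The mechanics of option [1] can be sketched in a few lines. This is only an illustration: `network_admin_state_up` and `resolve_port_binding` are hypothetical names, not the actual Neutron RPC schema, though 4095 is the VLAN tag the OVS agent does reserve for dead ports.

```python
# Sketch of option [1] from the mailing list: the agent learns the
# network's admin state from the get_device_details reply and isolates
# the port, instead of rewriting the port's own admin_state_up in the
# database. 'network_admin_state_up' and 'resolve_port_binding' are
# illustrative names, not the real Neutron code.

DEAD_VLAN_TAG = 4095  # tag the OVS agent reserves for "dead" ports

def resolve_port_binding(device_details, local_vlan):
    """Return the VLAN tag the agent should program for a port."""
    port_up = device_details.get('admin_state_up', True)
    net_up = device_details.get('network_admin_state_up', True)
    if port_up and net_up:
        return local_vlan
    # The port is administratively down, or its network is: park it on
    # the dead VLAN so no traffic flows, without touching the port record.
    return DEAD_VLAN_TAG

details = {'device': 'tap1234', 'admin_state_up': True,
           'network_admin_state_up': False}
print(resolve_port_binding(details, local_vlan=201))  # prints 4095
```

The appeal of this shape is that the port's stored admin_state_up stays untouched, so flipping the network back up restores each port to whatever state its owner had chosen.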

Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

I still have a feeling that we might be better off by not having an administrative status action for networks at all.

Changed in neutron:
milestone: none → juno-1
Kyle Mestery (mestery)
Changed in neutron:
milestone: juno-1 → juno-2
Kyle Mestery (mestery)
Changed in neutron:
milestone: juno-2 → none
Revision history for this message
Eugene Nikanorov (enikanorov) wrote :

So far the proposed option is to deprecate/remove the attribute from the network resource.
Another implied option is to leave things as they are.

I'm marking the bug as Incomplete to hear from participants which way they prefer to move forward.
Personally, I tend to agree with Salvatore (remove the attribute).

Changed in neutron:
status: In Progress → Incomplete
importance: High → Medium
Revision history for this message
Albert (albert-vico) wrote :

Hi,

In Havana and Icehouse, when we set admin_state_up=False on a network, all traffic was stopped. If this is going to change, as it now has, it should be clearly documented, since it is a change in the expected behavior.

Revision history for this message
Rami Vaknin (rvaknin) wrote : Re: [Bug 1237807] Re: Network admin_state_up is not working

Hey,

I don't work on OpenStack anymore; would it be OK if I asked you to discuss it with the current Red Hat engineers?

Revision history for this message
Albert (albert-vico) wrote :

Hmm, sure — to whom should I talk? Could you forward me to them?

Revision history for this message
Rami Vaknin (rvaknin) wrote :

I don't know who handles it these days; maybe ask on the bug in Launchpad who can take ownership of it.

Revision history for this message
Yair Fried (yfried) wrote :

Since this discussion has implications for similar features in routers and agents, I'd like to offer my opinion (uninformed as it might be):
Routers, networks (private, external, or shared), and agents provide bottlenecks that allow admins to shut down connectivity, the same way that, in the real world, a network admin might shut down a switch or a router for temporary maintenance and then bring it back up.
During shutdown, the implied and expected result is that all traffic through that bottleneck stops.
So I believe this feature is necessary and shouldn't be deprecated, and that all affected ports (VMs, floating IPs, DHCP, etc.) should reflect this state in their status.

Revision history for this message
Yair Fried (yfried) wrote :

Also, from a QA/Tempest point of view, exposing this API allows easier black-box testing in automation, without digging into implementation details like namespaces.

Revision history for this message
Albert (albert-vico) wrote :

Completely agree with Yair Fried here; I stumbled onto this while running some Tempest tests that use this feature.

Revision history for this message
Armando Migliaccio (armando-migliaccio) wrote :

@Yair: from comment #14 I sense that you are making wrong assumptions as to how a real-world data center or cloud environment actually works.

A resource like 'network' has two status attributes: admin_state_up and status; these are not to be confused. As of now, flipping admin_state_up from True to False has no effect in most plugins — it's just a toggle operation on the DB — and some plugins do not even support it (i.e. they barf at the update operation when/if it is tried, like the NSX plugin).

In my opinion this is the semantic:

admin_state_up: this is the management state of the resource. An admin can choose to disable/enable this resource for management purposes. Management-plane downtime must not cause a data-plane loss. To mirror this to the compute world, it's like putting a hypervisor in maintenance mode: perhaps I need to evacuate the host and cannot allow more VMs to be spawned, or I need to do some reactive maintenance and don't want the management framework to fiddle with it while I am at it. If you think that management downtime == data-plane downtime, you clearly have a wrong view of the real world!!

status: this is the status of the fabric — the data-plane status, if you will. Following the compute analogy, this means your host is hosed, not feeling well, something is broken; it pleads for troubleshooting!

Now, as for a Neutron network, admin_state_up may not be entirely useful, but deprecating it or marking it for removal is wrong. And it would be even worse to force admin_state_up=False to cause a data-plane loss.
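The distinction being drawn here can be restated as a tiny model (illustrative only, not Neutron code): the two fields are independent, and flipping one must not be read as, or cause, a change in the other.

```python
# Illustrative model of the two attributes distinguished above. They
# are independent fields: admin_state_up records operator intent on the
# management plane; status reports data-plane health from the backend.
from dataclasses import dataclass

@dataclass
class Network:
    admin_state_up: bool = True  # management intent, set by operators
    status: str = 'ACTIVE'       # data-plane health, reported by the backend

net = Network()
net.admin_state_up = False       # management plane: "hands off"
assert net.status == 'ACTIVE'    # data plane is untouched
print(net)
```

Under this reading, the behavior the bug reports is by design; under the reading in comments #4 and #14, it is the bug.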

Changed in neutron:
status: Incomplete → Won't Fix
Revision history for this message
Albert (albert-vico) wrote :

I'm sorry, but I do not see your point. This feature worked one way in Havana and was changed in Icehouse+ without notification. We can argue about letting it go and keeping it just as a kind of "info/debug" field, OK, but I still support Yair's point.
Networks are just resources, as are routers, for instance, and one should expect similar behavior from their admin_state_up.

Revision history for this message
Armando Migliaccio (armando-migliaccio) wrote :

Hi Albert: do you happen to know the commit hash that introduced the change? To the best of my knowledge, this does not look like a regression. I don't understand your point about routers and keeping behaviors consistent: switching a router's admin state to down does not affect the status of FIPs, just like switching a network's admin state to down does not affect the status of ports.

Am I missing something?

Revision history for this message
Albert (albert-vico) wrote :

I was trying to explain that admin_state_up should behave in the same manner for all components — routers, ports, networks, subnets, etc. As I understand it, if it is False, the component does not work, since it is disabled. To be honest, this change was introduced between Havana and Icehouse, and we are now in Juno moving to Kilo, so most current users are aware that, for networks, admin_state_up does nothing — which raises the question: shall we remove it?
