[RFE] ovs port status should be the same as physnet.

Bug #1575146 reported by Yan Songming on 2016-04-26
Affects: neutron | Importance: Wishlist | Assigned to: Carlos Goncalves

Bug Description

In some cases, when the physnet is down, the VM should know its status; currently it does not.

So maybe we should add a function for this, together with a configuration option.

When the option is 'True', the port in the VM should go down when the physnet on the host is down.

Tags: rfe
Changed in neutron:
assignee: nobody → Yan Songming (songmingyan)
Doug Wiegley (dougwig) on 2016-04-26
Changed in neutron:
importance: Undecided → Wishlist
status: New → Confirmed

How do you describe the state of a physnet?

tags: added: rfe
summary: - ovs port status should the same as tap.
+ [RFE] ovs port status should the same as tap.
summary: - [RFE] ovs port status should the same as tap.
+ [RFE] ovs port status should the same as physnet.
description: updated
Yan Songming (songmingyan) wrote:

If physnet1 is "eth1", I think the user can just use "ifconfig eth1" to confirm the status of physnet1.
The port of the VM has the information about its network and its physnet, so we can find the relationship between them.
Then, according to a configuration option in neutron.conf, we can set the port status seen by the VM to down when physnet1 is down.

This is useful for VM users who need to know whether the packets they send will actually leave the host.

So you're talking about the host's connectivity to the physical network, which is not quite the same thing. By the way, ifconfig is deprecated.

Changed in neutron:
status: Confirmed → Incomplete
assignee: Yan Songming (songmingyan) → nobody
description: updated
Yan Songming (songmingyan) wrote:

I am talking about how to let a VM that uses a tap port know whether its packets can actually leave the host.
For example:
We create vm_1, which uses port_1. This makes OVS create a tap port: tap1.
port_1 is created on net_1.
net_1's physnet is eth1.
When eth1 is down, vm_1 cannot know about it.
It still sends packets to port_1, but port_1 cannot actually send them out via eth1 because that interface is down.
This may break services running in vm_1.

So we should let vm_1 know that port_1 cannot be used in this case.
I think we should add a status linkage between tap1 and eth1,
just like SR-IOV: when the PF is down, the VF goes down.
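A PF/VF-style linkage first needs the agent to learn the physical interface's state. On Linux that can be read from sysfs rather than ifconfig; below is a minimal illustrative sketch (the helper name and the idea of wiring it into the OVS agent are assumptions, not code Neutron ships):

```python
# Sketch: check a physnet interface's link state via the kernel's sysfs,
# which exposes it under /sys/class/net/<iface>/operstate
# (values include "up", "down", "unknown").
from pathlib import Path

def link_is_up(iface, sysfs_root="/sys/class/net"):
    """Return True if the interface reports operstate 'up'."""
    state_file = Path(sysfs_root) / iface / "operstate"
    try:
        return state_file.read_text().strip() == "up"
    except OSError:
        # Interface does not exist (or sysfs is unavailable): treat as down.
        return False

# e.g. link_is_up("eth1") on the host running the OVS agent
```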

Also, in some cases we have two VMs; besides vm_1, there is vm_2.
vm_2 uses port_2, which makes OVS create a tap port: tap2.
port_2 is created on net_2.
net_2's physnet is also eth1.
When eth1 is down, it does not affect the communication between vm_1 and vm_2.
In this case, tap1 and tap2 should not go down when eth1 is down.

So I believe we need a blueprint to add a status linkage between tap ports and physical ports.
We also need a configuration option to enable or disable this behavior.
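The port → network → physnet resolution this comment relies on can be sketched as a small pure function; the data structures here are hypothetical stand-ins for what Neutron stores in its database (only Neutron knows these relationships):

```python
# Sketch: find which ports are backed by a given physnet, by chaining the
# port->network and network->physnet mappings the comment describes.

def ports_affected_by_physnet(port_to_net, net_to_physnet, physnet):
    """Return the ports whose outbound traffic would traverse `physnet`."""
    return sorted(
        port for port, net in port_to_net.items()
        if net_to_physnet.get(net) == physnet
    )

# The example from the comment: both networks are backed by eth1, so both
# ports are candidates -- but, as noted above, whether to actually mark a
# tap down also depends on whether its traffic needs to leave the host.
port_to_net = {"port_1": "net_1", "port_2": "net_2"}
net_to_physnet = {"net_1": "eth1", "net_2": "eth1"}
print(ports_affected_by_physnet(port_to_net, net_to_physnet, "eth1"))
# -> ['port_1', 'port_2']
```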

Changed in neutron:
status: Incomplete → Confirmed

I understand that the host link status may lead to failures, but we should have management tools whose job is to monitor/trigger alerts, fence hosts, etc.

Exposing this level of complexity via Neutron seems a bit too late in the error detection process.

Yan Songming (songmingyan) wrote:

I agree with you that there should be a tool or monitor, but as far as I know there is no component that does this. For such a tool or monitor to reflect the physnet status onto the tap port, it would need to know not only the relationship between the physnet and the network, but also the relationship between the network and the port. Only Neutron knows these relationships, so I think only Neutron can do this. Maybe we should add a network monitor, or something similar, that gets this information through the Neutron API, but I don't think that is an easy way to solve this problem.

Changed in neutron:
status: Confirmed → Triaged

We're considering whether the solution to bug 1598081 may come in handy to address this issue. We are somewhat reluctant to add deployment/topology complexity into Neutron right now, and it might be easier to allow third parties to trigger a status change on the port based on their own logic and orchestration. However, depending on the level of pluggability of [1], it might be possible to address this type of need by means of the diagnostics API/framework (still in the works). Please review the logs and provide further feedback.

I'll let Ihar chime in with further comments.

[1] https://blueprints.launchpad.net/neutron/+spec/troubleshooting

I guess the end result of the discussion in IRC is:

- ports will receive a new field for link status in addition to existing 'status' that will, for the first iteration, not be available for PUT via REST API. It would be available for update from inside core and service plugins though.
- in the future, we may consider a use case for external parties to update the new port field via REST API.
- in regards to the new diagnostics API currently in design, it could utilize the new link status field to implement some of its checks. It's TBD whether it will be useful for that and is not in scope for this RFE.

And we will need a spec for that.
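As a rough illustration of the first bullet, a Neutron-style extension attribute map for such a field might look like this. The field name and values here are placeholders (the eventual spec named it differently), not the final API:

```python
# Hypothetical attribute map in the style of Neutron API extensions.
# 'allow_put': False captures "not available for PUT via REST API" in the
# first iteration; core and service plugins would still update the field
# internally.
LINK_STATUS = 'link_status'  # placeholder name; see the spec review

RESOURCE_ATTRIBUTE_MAP = {
    'ports': {
        LINK_STATUS: {
            'allow_post': False,   # not settable at port creation
            'allow_put': False,    # first iteration: no REST updates
            'default': None,       # no status reported
            'is_visible': True,    # readable via GET
        },
    },
}
```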

Carlos Goncalves (cgoncalves) wrote:

Ihar, I should be done with a spec for bug #1598081 really soon (awaiting one more +2 from OPNFV Doctor; in the meantime you can see it here: https://gerrit.opnfv.org/gerrit/#/c/17629/). The spec proposes a new field for link status (named differently; we can discuss during review in neutron). I had the impression we were okay with PUT via REST API for that new field, no?

Fix proposed to branch: master
Review: https://review.openstack.org/351675

Changed in neutron:
assignee: nobody → Carlos Goncalves (cgoncalves)
status: Triaged → In Progress
Yan Songming (songmingyan) wrote:

I agree that we should add a new field for link status, but why should it not be available for PUT via the REST API? I think we should add the API to allow external parties to update the new port field now.

Reviewed: https://review.openstack.org/351675
Committed: https://git.openstack.org/cgit/openstack/neutron-specs/commit/?id=e919701c29d556eb9eeb014180d854c04bb46775
Submitter: Jenkins
Branch: master

commit e919701c29d556eb9eeb014180d854c04bb46775
Author: Carlos Goncalves <email address hidden>
Date: Fri Aug 5 13:46:33 2016 +0200

    Port data plane status

    Partial-Bug: #1598081
    Partial-Bug: #1575146

    Change-Id: I1a832c1e3dd0f8ce27780518d5a4876c0e19dd16

Reviewed: https://review.openstack.org/424868
Committed: https://git.openstack.org/cgit/openstack/neutron-lib/commit/?id=ee74cb2a5ccdc13e8bf137d7387f01c6b202c150
Submitter: Jenkins
Branch: master

commit ee74cb2a5ccdc13e8bf137d7387f01c6b202c150
Author: Carlos Goncalves <email address hidden>
Date: Tue Jan 24 21:52:27 2017 +0000

    API definition and reference for data plane status extension

    Related-Bug: #1598081
    Related-Bug: #1575146

    Partial-Implements: blueprint port-data-plane-status

    Change-Id: I04eef902b3310f799b1ce7ea44ed7cf77c74da04

Reviewed: https://review.openstack.org/424340
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=89de63de05e296af583032cb17a3d76b4b4d6a40
Submitter: Jenkins
Branch: master

commit 89de63de05e296af583032cb17a3d76b4b4d6a40
Author: Carlos Goncalves <email address hidden>
Date: Mon Jan 23 19:53:04 2017 +0000

    Port data plane status extension implementation

    Implements the port data plane status extension. Third parties
    can report via Neutron API issues in the underlying data plane
    affecting connectivity from/to Neutron ports.

    Supported statuses:
      - None: no status being reported; default value
      - ACTIVE: all is up and running
      - DOWN: no traffic can flow from/to the Neutron port

    Setting attribute available to admin or any user with specific role
    (default role: data_plane_integrator).

    ML2 extension driver loaded on request via configuration:

      [ml2]
      extension_drivers = data_plane_status

    Related-Bug: #1598081
    Related-Bug: #1575146

    DocImpact: users can get status of the underlying port data plane;
    attribute writable by admin users and users granted the
    'data-plane-integrator' role.
    APIImpact: port now has data_plane_status attr, set on port update

    Implements: blueprint port-data-plane-status

    Depends-On: I04eef902b3310f799b1ce7ea44ed7cf77c74da04
    Change-Id: Ic9e1e3ed9e3d4b88a4292114f4cb4192ac4b3502
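With the extension driver loaded as shown in the commit message, a third-party integrator updates the attribute through the regular port update call. A sketch of the request body (per the merged extension; sending it requires admin credentials or the data_plane_integrator role):

```python
import json

# Build the body for PUT /v2.0/ports/{port_id} setting the new attribute
# described in the commit above.
def data_plane_status_body(status):
    """Port-update payload; status is 'ACTIVE', 'DOWN', or None to clear."""
    return {"port": {"data_plane_status": status}}

print(json.dumps(data_plane_status_body("DOWN")))
# -> {"port": {"data_plane_status": "DOWN"}}
```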

Changed in neutron:
status: In Progress → Fix Released