VM goes in error state if created after ovsvapp restart

Bug #1679909 reported by Aman Kumar
This bug affects 1 person
Affects: networking-vsphere
Status: Fix Released
Importance: Undecided
Assigned to: Aman Kumar

Bug Description

VMs go into an error state if they are created after an OVSvApp agent restart.

In the automation test suite, many tests failed because VMs went into an error state. The following sequence of events reproduces the problem (see the sketch after this list):
1. Create a network and boot a VM on it. This works fine.
2. Delete the VM.
3. Restart the OVSvApp agents; the restart deletes the port group.
4. Boot a VM on the same network. It fails to boot and goes into an error state because the port group does not get recreated.
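
A minimal reproduction sketch of the above sequence using openstacksdk; the cloud name, image, flavor, and network names are placeholders, and the agent restart in step 3 is done out of band on the OVSvApp service VM:

    # Hypothetical reproduction script; assumes a clouds.yaml entry named
    # "devstack" and a cirros image / m1.tiny flavor.
    import openstack

    conn = openstack.connect(cloud="devstack")

    # Step 1: create a network and boot a VM on it -- this works fine.
    net = conn.network.create_network(name="demo-net")
    conn.network.create_subnet(network_id=net.id, ip_version=4,
                               cidr="10.0.0.0/24", name="demo-subnet")
    vm1 = conn.create_server("vm1", image="cirros", flavor="m1.tiny",
                             network=net.id, wait=True)

    # Step 2: delete the VM, dropping the network's port count to zero.
    conn.delete_server(vm1.id, wait=True)

    # Step 3 (out of band): restart the OVSvApp agent on its service VM;
    # before the fix this deleted the port group and left stale lvid
    # entries behind in the other agents.

    # Step 4: boot another VM on the same network -- it ends up in ERROR
    # because the port group is never recreated.
    vm2 = conn.create_server("vm2", image="cirros", flavor="m1.tiny",
                             network=net.id, wait=False)
    print(conn.compute.get_server(vm2.id).status)  # becomes ERROR shortly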

Aman Kumar (amank)
Changed in networking-vsphere:
assignee: nobody → Aman Kumar (amank)
Changed in networking-vsphere:
status: New → In Progress
OpenStack Infra (hudson-openstack) wrote : Fix merged to networking-vsphere (master)

Reviewed: https://review.openstack.org/451782
Committed: https://git.openstack.org/cgit/openstack/networking-vsphere/commit/?id=fa858c98ba5d3c3166023865bdfc102e1631b565
Submitter: Jenkins
Branch: master

commit fa858c98ba5d3c3166023865bdfc102e1631b565
Author: Aman Kumar <email address hidden>
Date: Thu Mar 23 03:02:03 2017 -0700

    VM goes in error state if created after ovsvapp restart

    VMs are going into error state because the lvid entries
    are not cleared in all the ovsvapp agents when we delete
    all the vms from vcenter and restarting one of the
    ovsvapp-agent from a cluster. So this patch of code
    deletes the port-group when the network_port_count
    becomes zero and informs all the agents in a cluster
    by updating the lvid.

    Closes-Bug: #1679909
    Co-Authored-By: Priyanka J <email address hidden>

    Change-Id: I0d1786ed3256b436ee9d3021a52f25b5f1e65d09
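
In outline, the fix acts on the server side when the last port of a network in a cluster is deleted: it removes the vSphere port group and fans out an lvid update so every OVSvApp agent in the cluster clears its local VLAN id mapping, rather than a restarted agent holding on to a stale entry. Below is a sketch of that control flow; the function and the plugin/vcenter/notifier parameters are illustrative assumptions, not the actual ovsvapp_mech_driver API:

    # Illustrative control flow only; the helpers passed in are
    # hypothetical stand-ins for the real driver collaborators.
    def handle_port_delete(plugin, vcenter, notifier, context, port, cluster):
        """After a port is deleted, tear down the port group once the
        network's port count reaches zero, and tell every OVSvApp agent
        in the cluster to clear its lvid mapping for that network."""
        net_id = port["network_id"]

        # Ports still left on this network (the deleted one is gone).
        remaining = plugin.get_ports_count(
            context, filters={"network_id": [net_id]})

        if remaining == 0:
            # Remove the vSphere port group so a stale one cannot
            # survive an agent restart.
            vcenter.delete_port_group(cluster, net_id)

            # Fan out to all agents in the cluster; each drops its local
            # VLAN id (lvid) entry so the next VM boot on this network
            # recreates the port group cleanly.
            notifier.network_delete(context, net_id, cluster)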

Changed in networking-vsphere:
status: In Progress → Fix Released
OpenStack Infra (hudson-openstack) wrote : Fix proposed to networking-vsphere (stable/ocata)

Fix proposed to branch: stable/ocata
Review: https://review.openstack.org/453465

OpenStack Infra (hudson-openstack) wrote : Fix proposed to networking-vsphere (stable/mitaka)

Fix proposed to branch: stable/mitaka
Review: https://review.openstack.org/453512

OpenStack Infra (hudson-openstack) wrote : Fix merged to networking-vsphere (stable/mitaka)

Reviewed: https://review.openstack.org/453512
Committed: https://git.openstack.org/cgit/openstack/networking-vsphere/commit/?id=ee9fe812bb99af7c5bf7270be25a2717f7bb1ebc
Submitter: Jenkins
Branch: stable/mitaka

commit ee9fe812bb99af7c5bf7270be25a2717f7bb1ebc
Author: Aman Kumar <email address hidden>
Date: Thu Mar 23 03:02:03 2017 -0700

    VM goes in error state if created after ovsvapp restart

    VMs are going into error state because the lvid entries
    are not cleared in all the ovsvapp agents when we delete
    all the vms from vcenter and restarting one of the
    ovsvapp-agent from a cluster. So this patch of code
    deletes the port-group when the network_port_count
    becomes zero and informs all the agents in a cluster
    by updating the lvid.

    Closes-Bug: #1679909
    Co-Authored-By: Priyanka J <email address hidden>

    (cherry picked from commit fa858c98ba5d3c3166023865bdfc102e1631b565)

    Conflicts:
     networking_vsphere/ml2/ovsvapp_mech_driver.py

    Change-Id: I0d1786ed3256b436ee9d3021a52f25b5f1e65d09

tags: added: in-stable-mitaka
OpenStack Infra (hudson-openstack) wrote : Fix merged to networking-vsphere (stable/ocata)

Reviewed: https://review.openstack.org/453465
Committed: https://git.openstack.org/cgit/openstack/networking-vsphere/commit/?id=bcff8e5b9f06780b6964980d1b1f911ceada2198
Submitter: Jenkins
Branch: stable/ocata

commit bcff8e5b9f06780b6964980d1b1f911ceada2198
Author: Aman Kumar <email address hidden>
Date: Thu Mar 23 03:02:03 2017 -0700

    VM goes in error state if created after ovsvapp restart

    VMs are going into error state because the lvid entries
    are not cleared in all the ovsvapp agents when we delete
    all the vms from vcenter and restarting one of the
    ovsvapp-agent from a cluster. So this patch of code
    deletes the port-group when the network_port_count
    becomes zero and informs all the agents in a cluster
    by updating the lvid.

    Also changed the typo in tox.ini file

    Closes-Bug: #1679909
    Co-Authored-By: Priyanka J <email address hidden>

    Change-Id: I0d1786ed3256b436ee9d3021a52f25b5f1e65d09
    (cherry picked from commit fa858c98ba5d3c3166023865bdfc102e1631b565)

tags: added: in-stable-ocata
OpenStack Infra (hudson-openstack) wrote : Fix proposed to networking-vsphere (stable/newton)

Fix proposed to branch: stable/newton
Review: https://review.openstack.org/459948

OpenStack Infra (hudson-openstack) wrote : Fix merged to networking-vsphere (stable/newton)

Reviewed: https://review.openstack.org/459948
Committed: https://git.openstack.org/cgit/openstack/networking-vsphere/commit/?id=7ee41fe55d9b536d63c2cd1b53cf2c758a42d059
Submitter: Jenkins
Branch: stable/newton

commit 7ee41fe55d9b536d63c2cd1b53cf2c758a42d059
Author: Aman Kumar <email address hidden>
Date: Thu Mar 23 03:02:03 2017 -0700

    VM goes in error state if created after ovsvapp restart

    VMs are going into error state because the lvid entries
    are not cleared in all the ovsvapp agents when we delete
    all the vms from vcenter and restarting one of the
    ovsvapp-agent from a cluster. So this patch of code
    deletes the port-group when the network_port_count
    becomes zero and informs all the agents in a cluster
    by updating the lvid.

    Closes-Bug: #1679909
    Co-Authored-By: Priyanka J <email address hidden>

    (cherry picked from commit fa858c98ba5d3c3166023865bdfc102e1631b565)

    Conflicts:
     networking_vsphere/ml2/ovsvapp_mech_driver.py

    Change-Id: I0d1786ed3256b436ee9d3021a52f25b5f1e65d09

tags: added: in-stable-newton