Doing an interface-detach shuts down the VM

Bug #1323204 reported by Vedamurthy Joshi
This bug affects 1 person
Affects: Juniper Openstack (status tracked in Trunk)
  R1.1:  Status: Fix Committed, Importance: High, Assigned to: Nagabhushana R
  Trunk: Status: Fix Committed, Importance: High, Assigned to: Nagabhushana R

Bug Description

Build: 1.10main-2189

On doing an interface-detach, even though the command completes successfully, the compute logs report that the interface detach failed.

The VM moves to the Shutoff state. This was tried with both cirros and ubuntu VMs.

The port is removed as well.

root@nodec34:/var/log/nova# nova interface-detach VeduTest1_ubuntu_vm 6d9bed5c-5b43-4912-a4d9-0d8481333edd
root@nodec34:/var/log/nova#
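
For reference, a rough python-novaclient equivalent of the CLI repro above (a sketch only; the client version, credentials and keystone endpoint below are placeholders, not values from this report):

    # Sketch of the API-level repro via python-novaclient; the credentials
    # and endpoint are placeholders.
    from novaclient import client

    nova = client.Client('2', 'admin', 'password', 'admin',
                         'http://127.0.0.1:5000/v2.0')

    server = nova.servers.find(name='VeduTest1_ubuntu_vm')
    nova.servers.interface_detach(server, '6d9bed5c-5b43-4912-a4d9-0d8481333edd')

    # Expected: the instance stays ACTIVE; with this bug it moves to SHUTOFF
    # and the compute log shows InterfaceDetachFailed.
    print(nova.servers.get(server.id).status)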

2014-05-26 13:16:27.178 27978 ERROR nova.virt.libvirt.driver [req-b04ec504-b9a6-4477-bc43-6f385ebdbade 925c9e7872064722a5a8720e0955abbf 319f38f1aedb4b699243eb4e08594af9] [instance: 746cfaeb-6b73-4108-a6f0-d547be5eaf91] detaching network adapter failed.
2014-05-26 13:16:27.179 27978 ERROR nova.openstack.common.rpc.amqp [req-b04ec504-b9a6-4477-bc43-6f385ebdbade 925c9e7872064722a5a8720e0955abbf 319f38f1aedb4b699243eb4e08594af9] Exception during message handling
2014-05-26 13:16:27.179 27978 TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
2014-05-26 13:16:27.179 27978 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 461, in _process_data
2014-05-26 13:16:27.179 27978 TRACE nova.openstack.common.rpc.amqp **args)
2014-05-26 13:16:27.179 27978 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py", line 172, in dispatch
2014-05-26 13:16:27.179 27978 TRACE nova.openstack.common.rpc.amqp result = getattr(proxyobj, method)(ctxt, **kwargs)
2014-05-26 13:16:27.179 27978 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3902, in detach_interface
2014-05-26 13:16:27.179 27978 TRACE nova.openstack.common.rpc.amqp self.driver.detach_interface(instance, condemned)
2014-05-26 13:16:27.179 27978 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1281, in detach_interface
2014-05-26 13:16:27.179 27978 TRACE nova.openstack.common.rpc.amqp raise exception.InterfaceDetachFailed(instance)
2014-05-26 13:16:27.179 27978 TRACE nova.openstack.common.rpc.amqp InterfaceDetachFailed: {u'vm_state': u'active', u'availability_zone': None, u'terminated_at': None, u'ephemeral_gb': 0, u'instance_type_id': 5, u'user_data': None, u'cleaned': False, u'vm_mode': None, u'deleted_at': None, u'reservation_id': u'r-pku96sxd', u'id': 29, u'security_groups': [], u'disable_terminate': False, u'display_name': u'VeduTest1_ubuntu_vm', u'uuid': u'746cfaeb-6b73-4108-a6f0-d547be5eaf91', u'default_swap_device': None, u'info_cache': {u'instance_uuid': u'746cfaeb-6b73-4108-a6f0-d547be5eaf91', u'network_info': [{u'ovs_interfaceid': None, u'network': {u'bridge': None, u'subnets': [{u'ips': [{u'floating_ips': [], u'meta': {}, u'type': u'fixed', u'version': 4, u'address': u'20.1.1.253'}], u'version': 4, u'meta': {}, u'dns': [{u'meta': {}, u'type': u'dns', u'version': 4, u'address': u'169.254.169.254'}], u'routes': [], u'cidr': u'20.1.1.0/24', u'gateway': {u'meta': {}, u'type': u'gateway', u'version': 4, u'address': u'20.1.1.254'}}], u'meta': {u'injected': False, u'tenant_id': u'319f38f1aedb4b699243eb4e08594af9'}, u'id': u'e99a37a5-2cb1-40fe-8efc-2c6414b3d140', u'label': u'vn2'}, u'devname': u'tap6d9bed5c-5b', u'qbh_params': None, u'meta': {}, u'address': u'02:6d:9b:ed:5c:5b', u'type': u'vrouter', u'id': u'6d9bed5c-5b43-4912-a4d9-0d8481333edd', u'qbg_params': None}]}, u'hostname': u'vedutest1-ubuntu-vm', u'launched_on': u'nodec34', u'display_description': u'VeduTest1_ubuntu_vm', u'key_data': None, u'kernel_id': u'', u'power_state': 1, u'default_ephemeral_device': None, u'progress': 0, u'project_id': u'319f38f1aedb4b699243eb4e08594af9', u'launched_at': u'2014-05-26T07:32:06.000000', u'scheduled_at': u'2014-05-26T07:32:03.000000', u'node': u'nodec34.englab.juniper.net', u'ramdisk_id': u'', u'access_ip_v6': None, u'access_ip_v4': None, u'deleted': False, u'key_name': None, u'updated_at': u'2014-05-26T07:32:06.000000', u'host': u'nodec34', u'architecture': None, u'user_id': u'925c9e7872064722a5a8720e0955abbf', u'system_metadata': {u'image_min_disk': u'20', u'instance_type_memory_mb': u'2048', u'instance_type_swap': u'0', u'instance_type_vcpu_weight': None, u'instance_type_root_gb': u'20', u'instance_type_id': u'5', u'instance_type_name': u'm1.small', u'instance_type_ephemeral_gb': u'0', u'instance_type_rxtx_factor': u'1', u'instance_type_flavorid': u'2', u'image_container_format': u'ovf', u'instance_type_vcpus': u'1', u'image_min_ram': u'0', u'image_disk_format': u'qcow2', u'image_base_image_ref': u'd40ea5b4-fe45-4142-881f-a1bb0376947e'}, u'task_state': None, u'shutdown_terminate': False, u'cell_name': None, u'root_gb': 20, u'locked': False, u'name': u'instance-0000001d', u'created_at': u'2014-05-26T07:32:03.000000', u'locked_by': None, u'launch_index': 0, u'metadata': {}, u'memory_mb': 2048, u'vcpus': 1, u'image_ref': u'd40ea5b4-fe45-4142-881f-a1bb0376947e', u'root_device_name': u'/dev/vda', u'auto_disk_config': False, u'os_type': None, u'config_drive': u''}
2014-05-26 13:16:27.179 27978 TRACE nova.openstack.common.rpc.amqp
2014-05-26 13:16:27.180 27978 INFO nova.compute.manager [-] Lifecycle event 1 on VM 746cfaeb-6b73-4108-a6f0-d547be5eaf91
2014-05-26 13:16:27.356 27978 WARNING nova.compute.manager [-] [instance: 746cfaeb-6b73-4108-a6f0-d547be5eaf91] Instance shutdown by itself. Calling the stop API.
2014-05-26 13:16:27.687 27978 INFO nova.virt.libvirt.driver [-] [instance: 746cfaeb-6b73-4108-a6f0-d547be5eaf91] Instance destroyed successfully.

Changed in juniperopenstack:
assignee: nobody → Sachin Bansal (bansalsachin)
importance: Undecided → High
milestone: none → r1.10-beta
tags: added: blocker config
Changed in juniperopenstack:
milestone: r1.10-beta → r1.06-fcs
no longer affects: juniperopenstack/r1.06
information type: Proprietary → Public
Changed in juniperopenstack:
assignee: Sachin Bansal (bansalsachin) → prasad miriyala (pmiriyala)
Revision history for this message
prasad miriyala (pmiriyala) wrote :

Replaced the modified contrail libvirtd (0.9.8+) with the stock libvirtd (0.9.8); the interface detach then goes through fine, i.e. the VM's interface is gone and the VM is up and running after the interface-detach.
/Prasad

Changed in juniperopenstack:
assignee: prasad miriyala (pmiriyala) → Raja Sivaramakrishnan (raja-u)
Sachin Bansal (sbansal)
tags: removed: config
Revision history for this message
Vedamurthy Joshi (vedujoshi) wrote : Re: [Bug 1323204] Re: Doing an interface-detach shuts down the VM

Raja, Prasad,
Is there any update on this? We still have this issue on the latest 1.10.

Vedu

Revision history for this message
Raja Sivaramakrishnan (raja-u) wrote :

Hi Vedu,
    We still have this issue in 1.1.

Raja

tags: added: neutronapi
no longer affects: juniperopenstack/r1.1
Changed in juniperopenstack:
milestone: r1.06-fcs → r1.10-fcs
Revision history for this message
Ashish Ranjan (aranjan-n) wrote :

We need to look at this after upgrading the Ubuntu kernel version.

Changed in juniperopenstack:
assignee: Raja Sivaramakrishnan (raja-u) → Anirban Chakraborty (abchak)
Changed in juniperopenstack:
milestone: r1.10-fcs → r1.11
status: New → Confirmed
Revision history for this message
Ashish Ranjan (aranjan-n) wrote :

We have to work on a libvirt enhancement for NET_TYPE_ETHERNET; this issue seems to be related to that. QoS settings also don't work today. We will target this in 1.11.

Notes below from dev.

Looks like an existing bug in libvirt. I don't think we have a choice other than using NET_TYPE_ETHERNET, as the other options assume Linux bridge/OVS/PCI passthrough. NET_TYPE_ETHERNET is supposed to be for other scenarios (like vrouter). Unfortunately, libvirt support for this type doesn't seem to be as high a priority as the other ones. We will have to fix this and send the patch to the libvirt team to upstream.
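
For context, the type='ethernet' interface mentioned above corresponds roughly to the libvirt domain XML sketched below (illustrative only: the target dev and MAC are taken from the log excerpt in this report, the virtio model is an assumption, and in practice this XML is generated by nova's libvirt driver):

    # Illustrative sketch of the <interface type='ethernet'> element used for
    # tap-based (e.g. vrouter) VIFs, as opposed to bridge/OVS/hostdev types.
    import xml.etree.ElementTree as ET

    iface = ET.Element('interface', type='ethernet')
    ET.SubElement(iface, 'target', dev='tap6d9bed5c-5b')    # tap device from the log above
    ET.SubElement(iface, 'mac', address='02:6d:9b:ed:5c:5b')
    ET.SubElement(iface, 'model', type='virtio')            # assumed guest NIC model

    print(ET.tostring(iface))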

tags: added: releasenote
Revision history for this message
Anirban Chakraborty (abchak) wrote :

A fix was added to the contrailvif driver. The root cause of the issue is that the tap device is deleted before vhost-net can be freed up by libvirt. When libvirt initializes vhost-net, the corresponding tap device already exists; during the deletion path, however, the tap device is freed up first, followed by libvirt's attempted vhost-net deallocation. This results in a qemu core dump. It appears to be a known issue (a similar bug exists: https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=1004608). The fix is to defer the deletion of the tap device in the contrailvif driver so that libvirt can free up vhost-net first.
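
In outline, the deferral looks roughly like the sketch below (a hypothetical illustration, not the actual contrailvif patch; the delay value and helper names are made up):

    # Hypothetical sketch: defer tap device deletion so libvirt can release
    # vhost-net before the tap device disappears.
    import subprocess
    import threading

    TAP_DELETE_DELAY_SEC = 2  # assumed grace period; the real fix may differ


    def _delete_tap(dev_name):
        # Equivalent of "ip link delete <tap>"; a failure here is tolerated
        # because libvirt may already have torn the device down.
        subprocess.call(['ip', 'link', 'delete', dev_name])


    def unplug(dev_name):
        # Instead of deleting the tap inline (which races with libvirt's
        # vhost-net teardown and can crash qemu), schedule the deletion to
        # run after a short delay.
        timer = threading.Timer(TAP_DELETE_DELAY_SEC, _delete_tap,
                                args=(dev_name,))
        timer.daemon = True
        timer.start()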

Revision history for this message
Vedamurthy Joshi (vedujoshi) wrote :

Anirban,
Can you please add the commit URLs for the same?

Revision history for this message
Anirban Chakraborty (abchak) wrote :