Volume detach in tempest fails because libvirt refuses connections

Bug #1251521 reported by Michael Still
Affects: OpenStack Compute (nova)
Status: Invalid
Importance: Medium
Assigned to: Unassigned

Bug Description

I just experienced this on https://review.openstack.org/#/c/55492/. It looks to me like the volume detach fails because libvirt has become unavailable:

2013-11-15 00:49:45.034 29876 DEBUG nova.openstack.common.rpc.amqp [-] received {u'_context_roles': [u'_member_'], u'_context_request_id': u'req-0fdc657c-fdb3-4aef-96c6-c7d3c6f18b33', u'_context_quota_class': None, u'_context_user_name': u'tempest.scenario.manager-tempest-1652099598-user', u'_context_project_name': u'tempest.scenario.manager-tempest-1652099598-tenant', u'_context_service_catalog': [{u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://127.0.0.1:8776/v1/49a55ed418d44af8b8104157045e8347', u'region': u'RegionOne', u'internalURL': u'http://127.0.0.1:8776/v1/49a55ed418d44af8b8104157045e8347', u'serviceName': u'cinder', u'id': u'1a7219a8e4e543909f4b2a497810fa7c', u'publicURL': u'http://127.0.0.1:8776/v1/49a55ed418d44af8b8104157045e8347'}], u'type': u'volume', u'name': u'cinder'}], u'_context_tenant': u'49a55ed418d44af8b8104157045e8347', u'_context_auth_token': '<SANITIZED>', u'args': {u'instance': {u'vm_state': u'active', u'availability_zone': None, u'terminated_at': None, u'ephemeral_gb': 0, u'instance_type_id': 6, u'user_data': None, u'cleaned': False, u'vm_mode': None, u'deleted_at': None, u'reservation_id': u'r-2ivdfgiz', u'id': 101, u'security_groups': [{u'deleted_at': None, u'user_id': u'2a86d0c9e67a4aa4b6e7b84ce2dd4776', u'description': u'default', u'deleted': False, u'created_at': u'2013-11-15T00:48:31.000000', u'updated_at': None, u'project_id': u'49a55ed418d44af8b8104157045e8347', u'id': 90, u'name': u'default'}], u'disable_terminate': False, u'root_device_name': u'/dev/vda', u'display_name': u'scenario-server--tempest-1998782633', u'uuid': u'9dbd99d4-09d7-43df-b8de-c6e65043e012', u'default_swap_device': None, u'info_cache': {u'instance_uuid': u'9dbd99d4-09d7-43df-b8de-c6e65043e012', u'deleted': False, u'created_at': u'2013-11-15T00:48:31.000000', u'updated_at': u'2013-11-15T00:49:18.000000', u'network_info': [{u'ovs_interfaceid': None, u'network': {u'bridge': u'br100', u'label': u'private', u'meta': {u'tenant_id': None, u'should_create_bridge': True, u'bridge_interface': u'eth0'}, u'id': u'd00c22d4-05b0-4c71-86ba-1c5d60b445bd', u'subnets': [{u'ips': [{u'meta': {}, u'type': u'fixed', u'floating_ips': [{u'meta': {}, u'type': u'floating', u'version': 4, u'address': u'172.24.4.225'}], u'version': 4, u'address': u'10.1.0.4'}], u'version': 4, u'meta': {u'dhcp_server': u'10.1.0.1'}, u'dns': [{u'meta': {}, u'type': u'dns', u'version': 4, u'address': u'8.8.4.4'}], u'routes': [], u'cidr': u'10.1.0.0/24', u'gateway': {u'meta': {}, u'type': u'gateway', u'version': 4, u'address': u'10.1.0.1'}}, {u'ips': [], u'version': None, u'meta': {u'dhcp_server': None}, u'dns': [], u'routes': [], u'cidr': None, u'gateway': {u'meta': {}, u'type': u'gateway', u'version': None, u'address': None}}]}, u'devname': None, u'qbh_params': None, u'meta': {}, u'address': u'fa:16:3e:57:00:d4', u'type': u'bridge', u'id': u'c7cdc48e-2ef3-43b8-9d8f-88c644afac78', u'qbg_params': None}], u'deleted_at': None}, u'hostname': u'scenario-server--tempest-1998782633', u'launched_on': u'devstack-precise-check-rax-ord-658168.slave.openstack.org', u'display_description': u'scenario-server--tempest-1998782633', u'key_data': u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDWN4HLjQWmJu2prhyp8mSkcVOx3W4dhK6GB1L4upm83DU7Ogj3Tg2cTuMqmO4bIt3gJv+BZB16auiyq5w+SEK8VVSuTresc7dD5qW7dej+bDaF6w/gLsEbP8s0rOvMo93esqF0Cwt7WyqpBXsRr8DEjdPDkJL9fRjFuuGz6sjpM9qAiKd7e1v37y+z39T2y7PoJA5241b0QDG5H6uHNdrCwxIaWxtX5+ac2kUJSxS7FjjtACPgsoBD0tltcpaEQaxmQANdAm4hkhe1rTpP7vfSrmENI0ZrwSjre2ZbWLA0IcM3JJwmsXWzXdPvjNC+GVqWmltugTNH77vOfwTbec+x Generated by Nova\n', u'deleted': False, u'config_drive': u'', u'power_state': 1, u'default_ephemeral_device': None, u'progress': 0, u'project_id': u'49a55ed418d44af8b8104157045e8347', u'launched_at': u'2013-11-15T00:48:50.000000', u'scheduled_at': u'2013-11-15T00:48:31.000000', u'node': u'devstack-precise-check-rax-ord-658168.slave.openstack.org', u'ramdisk_id': u'258c4266-6e2b-4aca-ae56-b495ea8d8b5f', u'access_ip_v6': None, u'access_ip_v4': None, u'kernel_id': u'47643b93-2bc5-49b3-8c2d-83791ee8a0e2', u'key_name': u'scenario-keypair--tempest-1179721946', u'updated_at': u'2013-11-15T00:49:16.000000', u'host': u'devstack-precise-check-rax-ord-658168.slave.openstack.org', u'user_id': u'2a86d0c9e67a4aa4b6e7b84ce2dd4776', u'system_metadata': {u'image_kernel_id': u'47643b93-2bc5-49b3-8c2d-83791ee8a0e2', u'instance_type_memory_mb': u'64', u'instance_type_swap': u'0', u'instance_type_vcpu_weight': None, u'instance_type_root_gb': u'0', u'instance_type_id': u'6', u'image_ramdisk_id': u'258c4266-6e2b-4aca-ae56-b495ea8d8b5f', u'instance_type_name': u'm1.nano', u'instance_type_ephemeral_gb': u'0', u'instance_type_rxtx_factor': u'1.0', u'image_disk_format': u'ami', u'instance_type_flavorid': u'42', u'image_container_format': u'ami', u'instance_type_vcpus': u'1', u'image_min_ram': u'0', u'image_min_disk': u'0', u'image_base_image_ref': u'c9676f02-4f3e-4014-b79e-1252e37faa45'}, u'task_state': None, u'shutdown_terminate': False, u'cell_name': None, u'root_gb': 0, u'locked': False, u'name': u'instance-00000065', u'created_at': u'2013-11-15T00:48:31.000000', u'locked_by': None, u'launch_index': 0, u'memory_mb': 64, u'vcpus': 1, u'image_ref': u'c9676f02-4f3e-4014-b79e-1252e37faa45', u'architecture': None, u'auto_disk_config': False, u'os_type': None, u'metadata': {}}, u'volume_id': u'7caf0141-ec38-40a9-9860-f3275989d2c5'}, u'namespace': None, u'_context_instance_lock_checked': False, u'_context_timestamp': u'2013-11-15T00:49:44.545660', u'_context_is_admin': False, u'version': u'2.0', u'_context_project_id': u'49a55ed418d44af8b8104157045e8347', u'_context_user': u'2a86d0c9e67a4aa4b6e7b84ce2dd4776', u'_unique_id': u'a906d8ed501b44b4a1324c846ae48c8d', u'_context_read_deleted': u'no', u'_context_user_id': u'2a86d0c9e67a4aa4b6e7b84ce2dd4776', u'method': u'detach_volume', u'_context_remote_address': u'127.0.0.1'} _safe_log /opt/stack/new/nova/nova/openstack/common/rpc/common.py:277
2013-11-15 00:49:45.035 29876 DEBUG nova.openstack.common.rpc.amqp [-] unpacked context: {'read_deleted': u'no', 'project_name': u'tempest.scenario.manager-tempest-1652099598-tenant', 'user_id': u'2a86d0c9e67a4aa4b6e7b84ce2dd4776', 'roles': [u'_member_'], 'timestamp': u'2013-11-15T00:49:44.545660', 'auth_token': '<SANITIZED>', 'remote_address': u'127.0.0.1', 'quota_class': None, 'is_admin': False, 'user': u'2a86d0c9e67a4aa4b6e7b84ce2dd4776', 'service_catalog': [{u'endpoints': [{u'adminURL': u'http://127.0.0.1:8776/v1/49a55ed418d44af8b8104157045e8347', u'region': u'RegionOne', u'id': u'1a7219a8e4e543909f4b2a497810fa7c', u'serviceName': u'cinder', u'internalURL': u'http://127.0.0.1:8776/v1/49a55ed418d44af8b8104157045e8347', u'publicURL': u'http://127.0.0.1:8776/v1/49a55ed418d44af8b8104157045e8347'}], u'endpoints_links': [], u'type': u'volume', u'name': u'cinder'}], 'request_id': u'req-0fdc657c-fdb3-4aef-96c6-c7d3c6f18b33', 'instance_lock_checked': False, 'project_id': u'49a55ed418d44af8b8104157045e8347', 'user_name': u'tempest.scenario.manager-tempest-1652099598-user', 'tenant': u'49a55ed418d44af8b8104157045e8347'} _safe_log /opt/stack/new/nova/nova/openstack/common/rpc/common.py:277
2013-11-15 00:49:45.036 DEBUG nova.openstack.common.rpc.amqp [req-0fdc657c-fdb3-4aef-96c6-c7d3c6f18b33 tempest.scenario.manager-tempest-1652099598-user tempest.scenario.manager-tempest-1652099598-tenant] Making synchronous call on conductor ... multicall /opt/stack/new/nova/nova/openstack/common/rpc/amqp.py:553
2013-11-15 00:49:45.037 DEBUG nova.openstack.common.rpc.amqp [req-0fdc657c-fdb3-4aef-96c6-c7d3c6f18b33 tempest.scenario.manager-tempest-1652099598-user tempest.scenario.manager-tempest-1652099598-tenant] MSG_ID is 4e48fe5fc3844f18903fc60cdac9545a multicall /opt/stack/new/nova/nova/openstack/common/rpc/amqp.py:556
2013-11-15 00:49:45.037 DEBUG nova.openstack.common.rpc.amqp [req-0fdc657c-fdb3-4aef-96c6-c7d3c6f18b33 tempest.scenario.manager-tempest-1652099598-user tempest.scenario.manager-tempest-1652099598-tenant] UNIQUE_ID is 9fe3d2e29e364a1dbde5c4fba5185bae. _add_unique_id /opt/stack/new/nova/nova/openstack/common/rpc/amqp.py:341
2013-11-15 00:49:45.047 29876 DEBUG amqp [-] Closed channel #1 _do_close /usr/local/lib/python2.7/dist-packages/amqp/channel.py:95
2013-11-15 00:49:45.048 29876 DEBUG amqp [-] using channel_id: 1 __init__ /usr/local/lib/python2.7/dist-packages/amqp/channel.py:71
2013-11-15 00:49:45.050 29876 DEBUG amqp [-] Channel open _open_ok /usr/local/lib/python2.7/dist-packages/amqp/channel.py:429
2013-11-15 00:49:45.071 AUDIT nova.compute.manager [req-0fdc657c-fdb3-4aef-96c6-c7d3c6f18b33 tempest.scenario.manager-tempest-1652099598-user tempest.scenario.manager-tempest-1652099598-tenant] [instance: 9dbd99d4-09d7-43df-b8de-c6e65043e012] Detach volume 7caf0141-ec38-40a9-9860-f3275989d2c5 from mountpoint /dev/vdb
2013-11-15 00:49:46.513 DEBUG nova.openstack.common.lockutils [req-0fdc657c-fdb3-4aef-96c6-c7d3c6f18b33 tempest.scenario.manager-tempest-1652099598-user tempest.scenario.manager-tempest-1652099598-tenant] Got semaphore "connect_volume" lock /opt/stack/new/nova/nova/openstack/common/lockutils.py:167
2013-11-15 00:49:46.513 DEBUG nova.openstack.common.lockutils [req-0fdc657c-fdb3-4aef-96c6-c7d3c6f18b33 tempest.scenario.manager-tempest-1652099598-user tempest.scenario.manager-tempest-1652099598-tenant] Got semaphore / lock "disconnect_volume" inner /opt/stack/new/nova/nova/openstack/common/lockutils.py:247
2013-11-15 00:49:48.640 DEBUG nova.virt.libvirt.driver [req-0fdc657c-fdb3-4aef-96c6-c7d3c6f18b33 tempest.scenario.manager-tempest-1652099598-user tempest.scenario.manager-tempest-1652099598-tenant] Connection to libvirt broke _test_connection /opt/stack/new/nova/nova/virt/libvirt/driver.py:699
2013-11-15 00:49:48.640 DEBUG nova.virt.libvirt.driver [req-0fdc657c-fdb3-4aef-96c6-c7d3c6f18b33 tempest.scenario.manager-tempest-1652099598-user tempest.scenario.manager-tempest-1652099598-tenant] Connecting to libvirt: qemu:///system _get_new_connection /opt/stack/new/nova/nova/virt/libvirt/driver.py:625
2013-11-15 00:49:48.641 29876 ERROR nova.virt.libvirt.driver [-] Connection to libvirt failed: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Connection refused
2013-11-15 00:49:48.641 29876 TRACE nova.virt.libvirt.driver Traceback (most recent call last):
2013-11-15 00:49:48.641 29876 TRACE nova.virt.libvirt.driver File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 740, in _connect
2013-11-15 00:49:48.641 29876 TRACE nova.virt.libvirt.driver return libvirt.openAuth(uri, auth, flags)
2013-11-15 00:49:48.641 29876 TRACE nova.virt.libvirt.driver File "/usr/lib/python2.7/dist-packages/libvirt.py", line 102, in openAuth
2013-11-15 00:49:48.641 29876 TRACE nova.virt.libvirt.driver if ret is None:raise libvirtError('virConnectOpenAuth() failed')
2013-11-15 00:49:48.641 29876 TRACE nova.virt.libvirt.driver libvirtError: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Connection refused
2013-11-15 00:49:48.641 29876 TRACE nova.virt.libvirt.driver
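
For context, the two DEBUG lines above ("Connection to libvirt broke" followed by "Connecting to libvirt: qemu:///system") are the driver noticing a dead handle and trying to reopen it, and the reopen is what raises the libvirtError. A minimal sketch of that test-then-reconnect cycle using the libvirt-python bindings (illustrative only, not nova's exact code):

import libvirt

LIBVIRT_URI = 'qemu:///system'

def _connection_alive(conn):
    # Cheap RPC round-trip; if libvirtd has died, any call on the old
    # handle raises libvirtError ("Connection to libvirt broke").
    try:
        conn.getLibVersion()
        return True
    except libvirt.libvirtError:
        return False

def get_connection(conn=None):
    if conn is None or not _connection_alive(conn):
        # When libvirtd itself is down, this raises the same
        # "Failed to connect socket to '/var/run/libvirt/libvirt-sock':
        # Connection refused" error seen in the traceback above.
        conn = libvirt.open(LIBVIRT_URI)
    return conn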

Tags: libvirt
Revision history for this message
Matt Riedemann (mriedem) wrote :

Might be related to bug 1257644.

Revision history for this message
Matt Riedemann (mriedem) wrote :

Never mind, I don't think this is bug 1257644 (which is actually a dupe of bug 1255624); this one has its own signature. 5 hits in the last 2 weeks on this one:

message:"libvirtError: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Connection refused'" AND filename:"logs/screen-n-cpu.txt"

http://logstash.openstack.org/#eyJzZWFyY2giOiJmaWxlbmFtZTpcImxvZ3Mvc2NyZWVuLW4tY3B1LnR4dFwiIEFORCBtZXNzYWdlOlwibGlidmlydEVycm9yOiBGYWlsZWQgdG8gY29ubmVjdCBzb2NrZXQgdG8gJy92YXIvcnVuL2xpYnZpcnQvbGlidmlydC1zb2NrJzogQ29ubmVjdGlvbiByZWZ1c2VkJ1wiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiJhbGwiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzkwODQzNTI0MTY3fQ==

Interestingly they only show up in the check queue, but that probably doesn't matter.
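
For reference, that long logstash URL is just the query, base64-encoded after the '#'. A quick way to recover it with the standard library (a sketch; the fragment below is copied from the link above):

import base64
import json

# Everything after the '#' in the logstash.openstack.org link above:
fragment = 'eyJzZWFyY2giOiJmaWxlbmFtZTpcImxvZ3Mvc2NyZWVuLW4tY3B1LnR4dFwiIEFORCBtZXNzYWdlOlwibGlidmlydEVycm9yOiBGYWlsZWQgdG8gY29ubmVjdCBzb2NrZXQgdG8gJy92YXIvcnVuL2xpYnZpcnQvbGlidmlydC1zb2NrJzogQ29ubmVjdGlvbiByZWZ1c2VkJ1wiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiJhbGwiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzkwODQzNTI0MTY3fQ=='

# Decode and pretty-print the embedded search definition.
print(json.dumps(json.loads(base64.b64decode(fragment).decode('utf-8')),
                 indent=2))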

Joe Gordon (jogo)
Changed in nova:
status: New → Confirmed
Revision history for this message
Davanum Srinivas (DIMS) (dims-v) wrote :

I see only one hit, and it's in check-tempest-dsvm-neutron-full-icehouse (http://logs.openstack.org/14/120814/4/check/check-tempest-dsvm-neutron-full-icehouse/17641af/logs/libvirtd.txt.gz), against the old libvirt version 0.9.8.

Changed in nova:
status: Confirmed → Invalid
Revision history for this message
Matt Riedemann (mriedem) wrote :
Changed in nova:
status: Invalid → Confirmed
Revision history for this message
Davanum Srinivas (DIMS) (dims-v) wrote :

Looks like there's an entry in syslog:
See http://logs.openstack.org/20/167020/1/check/check-tempest-dsvm-centos7/cd6c255/logs/syslog.txt.gz

which has an example of the traceback:
http://logs.openstack.org/20/167020/1/check/check-tempest-dsvm-centos7/cd6c255/logs/screen-n-cpu.txt.gz?level=TRACE#_2015-03-25_07_03_28_539

The start of the log looks like this:

Mar 25 07:03:28 devstack-centos7-rax-iad-1553312.slave.openstack.org libvirtd[9811]: Caught Segmentation violation dumping internal log buffer:
Mar 25 07:03:28 devstack-centos7-rax-iad-1553312.slave.openstack.org libvirtd[9811]: ====== start of log =====
Mar 25 07:03:28 devstack-centos7-rax-iad-1553312.slave.openstack.org libvirtd[9811]: 2015-03-25 07:03:28.107+0000: 9811: debug : virEventPollDispatchHandles:482 : i=3 w=4
Mar 25 07:03:28 devstack-centos7-rax-iad-1553312.slave.openstack.org libvirtd[9811]: 2015-03-25 07:03:28.107+0000: 9811: debug : virEventPollDispatchHandles:482 : i=4 w=5
Mar 25 07:03:28 devstack-centos7-rax-iad-1553312.slave.openstack.org libvirtd[9811]: 2015-03-25 07:03:28.107+0000: 9811: debug : virEventPollDispatchHandles:482 : i=5 w=6
Mar 25 07:03:28 devstack-centos7-rax-iad-1553312.slave.openstack.org libvirtd[9811]: 2015-03-25 07:03:28.107+0000: 9811: debug : virEventPollDispatchHandles:482 : i=7 w=8
.......
.......
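
So the "Connection refused" in nova-compute is downstream of a libvirtd segfault. Given that, here's a small triage helper (a sketch; the marker string is taken from the syslog excerpt above, and the gzipped-log assumption matches how gate logs are published) to check a job's downloaded syslog for the crash before digging into nova:

import gzip
import sys

# Marker from the syslog excerpt above.
MARKER = 'Caught Segmentation violation'

def find_libvirtd_crash(path):
    # Gate logs are usually gzipped (e.g. syslog.txt.gz).
    opener = gzip.open if path.endswith('.gz') else open
    with opener(path, 'rt', errors='replace') as f:
        for lineno, line in enumerate(f, 1):
            if 'libvirtd' in line and MARKER in line:
                return lineno, line.rstrip()
    return None

if __name__ == '__main__':
    hit = find_libvirtd_crash(sys.argv[1])
    print(hit if hit else 'no libvirtd segfault found')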

Changed in nova:
importance: Undecided → Medium
Revision history for this message
Matt Riedemann (mriedem) wrote :

Haven't seen this in a long time, so marking invalid.

Changed in nova:
status: Confirmed → Invalid