cinder volume manager: Error encountered during initialization of driver: LVMISCSIDriver

Bug #1394179 reported by Kashyap Chamarthy on 2014-11-19
Affects: Cinder | Status: Invalid | Importance: Undecided | Assigned to: Unassigned

Bug Description

Noticed this on a new setup with these commits in DevStack and Cinder:

(1) DevStack:
--------
$ git log | head -5
commit dcffc70b67ab26844dc76780d33adaf4d40b0bd8
Merge: 5477b08 1f26c0e
Author: Jenkins <email address hidden>
Date: Wed Nov 19 05:28:35 2014 +0000
--------

(2) Cinder:
--------
 cinder]$ git log | head -5
commit c0fb19e96ee4c22212fd72ec44933ce282dbe014
Merge: 700bf69 71f502e
Author: Jenkins <email address hidden>
Date: Tue Nov 18 23:24:31 2014 +0000
--------

Error:
--------
.
.
.
2014-11-19 07:23:25.902 DEBUG cinder.openstack.common.service [-] ******************************************************************************** from (pid=7953) log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:2002
2014-11-19 07:23:26.022 DEBUG cinder.openstack.common.processutils [req-eba23a1b-06ea-4ba4-b6c8-6d212612b66f None None] Result was 5 from (pid=8081) execute /opt/stack/cinder/cinder/openstack/common/processutils.py:192
2014-11-19 07:23:26.025 ERROR cinder.volume.manager [req-eba23a1b-06ea-4ba4-b6c8-6d212612b66f None None] Error encountered during initialization of driver: LVMISCSIDriver
2014-11-19 07:23:26.027 ERROR cinder.volume.manager [req-eba23a1b-06ea-4ba4-b6c8-6d212612b66f None None] Unexpected error while running command.
Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs --noheadings -o name stack-volumes-lvmdriver-1
Exit code: 5
Stdout: u''
Stderr: u' Volume group "stack-volumes-lvmdriver-1" not found\n Skipping volume group stack-volumes-lvmdriver-1\n'
2014-11-19 07:23:26.027 TRACE cinder.volume.manager Traceback (most recent call last):
2014-11-19 07:23:26.027 TRACE cinder.volume.manager File "/opt/stack/cinder/cinder/volume/manager.py", line 274, in init_host
2014-11-19 07:23:26.027 TRACE cinder.volume.manager self.driver.check_for_setup_error()
2014-11-19 07:23:26.027 TRACE cinder.volume.manager File "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 105, in wrapper
2014-11-19 07:23:26.027 TRACE cinder.volume.manager return f(*args, **kwargs)
2014-11-19 07:23:26.027 TRACE cinder.volume.manager File "/opt/stack/cinder/cinder/volume/drivers/lvm.py", line 84, in check_for_setup_error
2014-11-19 07:23:26.027 TRACE cinder.volume.manager executor=self._execute)
2014-11-19 07:23:26.027 TRACE cinder.volume.manager File "/opt/stack/cinder/cinder/brick/local_dev/lvm.py", line 83, in __init__
2014-11-19 07:23:26.027 TRACE cinder.volume.manager if self._vg_exists() is False:
2014-11-19 07:23:26.027 TRACE cinder.volume.manager File "/opt/stack/cinder/cinder/brick/local_dev/lvm.py", line 108, in _vg_exists
2014-11-19 07:23:26.027 TRACE cinder.volume.manager self.vg_name, root_helper=self._root_helper, run_as_root=True)
2014-11-19 07:23:26.027 TRACE cinder.volume.manager File "/opt/stack/cinder/cinder/utils.py", line 142, in execute
2014-11-19 07:23:26.027 TRACE cinder.volume.manager return processutils.execute(*cmd, **kwargs)
2014-11-19 07:23:26.027 TRACE cinder.volume.manager File "/opt/stack/cinder/cinder/openstack/common/processutils.py", line 200, in execute
2014-11-19 07:23:26.027 TRACE cinder.volume.manager cmd=sanitized_cmd)
2014-11-19 07:23:26.027 TRACE cinder.volume.manager ProcessExecutionError: Unexpected error while running command.
2014-11-19 07:23:26.027 TRACE cinder.volume.manager Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs --noheadings -o name stack-volumes-lvmdriver-1
2014-11-19 07:23:26.027 TRACE cinder.volume.manager Exit code: 5
2014-11-19 07:23:26.027 TRACE cinder.volume.manager Stdout: u''
2014-11-19 07:23:26.027 TRACE cinder.volume.manager Stderr: u' Volume group "stack-volumes-lvmdriver-1" not found\n Skipping volume group stack-volumes-lvmdriver-1\n'
2014-11-19 07:23:26.027 TRACE cinder.volume.manager
.
.
.
--------
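The failing check can be reproduced by hand. LVM commands such as `vgs` exit with status 5 on failure, which is what the driver's `_vg_exists()` check sees here; a minimal sketch (assuming root access on the affected host), minus the rootwrap indirection:

```shell
# Re-run the existence check the driver performs, without rootwrap.
# vgs exits non-zero (5, as in the log above) when the VG is not found.
sudo env LC_ALL=C vgs --noheadings -o name stack-volumes-lvmdriver-1
echo $?
```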

Some command outputs
====================

Running Cinder services:
--------
$ ps -ef | grep -i cinder
kashyapc 7950 7851 0 07:23 pts/16 00:00:03 python /opt/stack/cinder/bin/cinder-scheduler --config-file /etc/cinder/cinder.conf
kashyapc 7951 7835 1 07:23 pts/15 00:00:15 python /opt/stack/cinder/bin/cinder-api --config-file /etc/cinder/cinder.conf
kashyapc 7953 7879 1 07:23 pts/18 00:00:15 python /opt/stack/cinder/bin/cinder-volume --config-file /etc/cinder/cinder.conf
kashyapc 7954 7864 0 07:23 pts/17 00:00:03 python /opt/stack/cinder/bin/cinder-backup --config-file /etc/cinder/cinder.conf
kashyapc 8081 7953 0 07:23 pts/18 00:00:01 python /opt/stack/cinder/bin/cinder-volume --config-file /etc/cinder/cinder.conf
kashyapc 8086 7951 0 07:23 pts/15 00:00:00 python /opt/stack/cinder/bin/cinder-api --config-file /etc/cinder/cinder.conf
kashyapc 8087 7951 0 07:23 pts/15 00:00:00 python /opt/stack/cinder/bin/cinder-api --config-file /etc/cinder/cinder.conf
kashyapc 8088 7951 0 07:23 pts/15 00:00:00 python /opt/stack/cinder/bin/cinder-api --config-file /etc/cinder/cinder.conf
kashyapc 8090 7951 0 07:23 pts/15 00:00:00 python /opt/stack/cinder/bin/cinder-api --config-file /etc/cinder/cinder.conf
--------

Cinder services:
--------
$ cinder service-list
+------------------+---------------------------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+---------------------------+------+---------+-------+----------------------------+-----------------+
| cinder-backup | devstack-numa | nova | enabled | up | 2014-11-19T12:33:06.000000 | None |
| cinder-scheduler | devstack-numa | nova | enabled | up | 2014-11-19T12:33:06.000000 | None |
| cinder-volume | devstack-numa@lvmdriver-1 | nova | enabled | up | 2014-11-19T12:33:06.000000 | None |
+------------------+---------------------------+------+---------+-------+----------------------------+-----------------+

--------
$ cinder credentials
+------------------+-----------------------------------------------+
| User Credentials | Value |
+------------------+-----------------------------------------------+
| id | fe9c37a0b57a4c2daa341bc29fdf3172 |
| name | admin |
| roles | [{u'name': u'_member_'}, {u'name': u'admin'}] |
| roles_links | [] |
| username | admin |
+------------------+-----------------------------------------------+
+-----------+---------------------------------------------------------------------------------------------------------+
| Token | Value |
+-----------+---------------------------------------------------------------------------------------------------------+
| audit_ids | [u'8E91rxihRTix-EocalCJyA'] |
| expires | 2014-11-19T13:42:32Z |
| id | 0c9ac000602b47368317742e909e2eda |
| issued_at | 2014-11-19T12:42:32.820342 |
| tenant | {u'id': u'a69100c3bb3949799a4102e746d46bc9', u'enabled': True, u'description': None, u'name': u'admin'} |
+-----------+---------------------------------------------------------------------------------------------------------+

Info about Logical volumes:
--------
$ sudo lvs
  LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  root fedora_ip-192-169-142-68 -wi-ao---- 35.47g
  swap fedora_ip-192-169-142-68 -wi-ao---- 4.00g
--------

Jay Bryant (jsbryant) wrote :

The error reported just means that the expected LVM volume group stack-volumes-lvmdriver-1 couldn't be found. What is the output of 'vgs'? Are there any errors from devstack trying to create the volume group? Insufficient space?

Changed in cinder:
status: New → Incomplete
Kashyap Chamarthy (kashyapc) wrote :

Output of 'vgs':
--------
$ sudo vgs
  VG #PV #LV #SN Attr VSize VFree
  fedora_ip-192-169-142-68 1 2 0 wz--n- 39.51g 40.00m
--------

From the DevStack log for volume group creation, it reports that the volume group was successfully created:
--------
.
.
.
2014-11-19 09:46:38.728 | + _create_cinder_volume_group stack-volumes-lvmdriver-1 /home/kashyapc/src/cloud/data/stack-volumes-lvmdriver-1-backing-file
2014-11-19 09:46:38.729 | + local vg_name=stack-volumes-lvmdriver-1
2014-11-19 09:46:38.729 | + local backing_file=/home/kashyapc/src/cloud/data/stack-volumes-lvmdriver-1-backing-file
2014-11-19 09:46:38.729 | + sudo vgs stack-volumes-lvmdriver-1
2014-11-19 09:46:38.820 | Volume group "stack-volumes-lvmdriver-1" not found
2014-11-19 09:46:38.820 | Skipping volume group stack-volumes-lvmdriver-1
2014-11-19 09:46:38.852 | + '[' -z '' ']'
2014-11-19 09:46:38.853 | + [[ -f /home/kashyapc/src/cloud/data/stack-volumes-lvmdriver-1-backing-file ]]
2014-11-19 09:46:38.853 | + truncate -s 24G /home/kashyapc/src/cloud/data/stack-volumes-lvmdriver-1-backing-file
2014-11-19 09:46:38.872 | ++ sudo losetup -f --show /home/kashyapc/src/cloud/data/stack-volumes-lvmdriver-1-backing-file
2014-11-19 09:46:38.956 | + local vg_dev=/dev/loop0
2014-11-19 09:46:38.957 | + sudo vgs stack-volumes-lvmdriver-1
2014-11-19 09:46:39.037 | Volume group "stack-volumes-lvmdriver-1" not found
2014-11-19 09:46:39.037 | Skipping volume group stack-volumes-lvmdriver-1
2014-11-19 09:46:39.069 | + sudo vgcreate stack-volumes-lvmdriver-1 /dev/loop0
2014-11-19 09:46:39.201 | Physical volume "/dev/loop0" successfully created
2014-11-19 09:46:39.235 | Volume group "stack-volumes-lvmdriver-1" successfully created
.
.
.
--------

It's a VM; I see there's enough space for a 'tiny'-flavoured Nova instance:

    $ nova boot --image cirros-0.3.1-x86_64-disk --flavor m1.tiny cirrvm5

Isn't this space sufficient for a tiny guest?

$ df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/fedora_ip--192--169--142--68-root ext4 35G 3.5G 30G 11% /
devtmpfs devtmpfs 3.8G 0 3.8G 0% /dev
tmpfs tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs tmpfs 3.9G 356K 3.9G 1% /run
tmpfs tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs tmpfs 3.9G 8.0K 3.9G 1% /tmp
/dev/vda1 ext4 477M 94M 354M 21% /boot
tmpfs tmpfs 780M 0 780M 0% /run/user/0

(I had successful DevStack runs on this very same virtual machine previously.)

Changed in cinder:
status: Incomplete → New
Jay Bryant (jsbryant) wrote :

Strange. So, devstack thinks it created the volume group, but it is not showing up in your 'vgs' output? Did you maybe reboot your VM after running devstack, losing the mounted /dev/loop0 device? What do you get from 'losetup -a'?

Kashyap Chamarthy (kashyapc) wrote :

[Oops, my previous comment got added twice.]

Jay, you're right: I did indeed reboot the DevStack instance and ran ./rejoin-stack.sh. I just did it again:

 - Reboot the VM
 - SELinux in Permissive mode
 - Rejoin the Stack service: $ ./rejoin-stack.sh

   It fails with the same stack trace (seen in the c-api `screen` window) as noted in the description.

And yep, seems like it lost the mounted loop device:

  $ losetup -a
  $

When I'm not using DevStack, I create a loop device by hand, something like:

  $ dd if=/dev/zero of=/cinder-volumes bs=1 count=0 seek=5G
  $ losetup -f --show /cinder-volumes
  $ losetup -l
  $ vgcreate cinder-volumes /dev/loop0
  $ vgdisplay cinder-volumes

Probably I should do something like that in this case.
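In this case, though, re-attaching the existing backing file may be enough: the LVM metadata lives inside the file, so the volume group should reappear once the loop device is back. A sketch, assuming the backing-file path from the DevStack log above is still present:

```shell
# Assumes the backing file created by DevStack survived the reboot.
BACKING=/home/kashyapc/src/cloud/data/stack-volumes-lvmdriver-1-backing-file

sudo losetup -f --show "$BACKING"             # re-attach; prints e.g. /dev/loop0
sudo vgscan                                   # rescan devices for VG metadata
sudo vgchange -ay stack-volumes-lvmdriver-1   # activate the volume group
sudo vgs stack-volumes-lvmdriver-1            # should now succeed
```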

Jay, feel free to close this if you think DevStack shouldn't re-create the volume group when you run rejoin-stack.sh. Thanks for helping to debug this.

Jay Bryant (jsbryant) wrote :

Kashyap,

You are welcome. I have hit the same problem before. This isn't really a bug, as it isn't rejoin-stack's job to set the volume group up every time. Once it has been set up, it is really more of a Linux administration matter: if you want it to be persistent, you need to make sure that Linux sets it up at boot time.

At this point, I am going to close this. If it is something that really bothers you, it might be worth looking at whether rejoin-stack can be improved to resolve the situation, or at least print a warning message.
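For anyone wanting the boot-time persistence Jay describes, one possible (untested) approach on a systemd host is a oneshot unit that re-attaches the backing file at boot. The unit name here is hypothetical; the backing-file path is the one from this report:

```shell
# Hypothetical unit name; adjust the path if your DevStack data dir differs.
sudo tee /etc/systemd/system/devstack-cinder-loop.service <<'EOF'
[Unit]
Description=Re-attach DevStack Cinder LVM backing file
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/losetup -f /home/kashyapc/src/cloud/data/stack-volumes-lvmdriver-1-backing-file

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl enable devstack-cinder-loop.service
```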

Changed in cinder:
status: New → Invalid