Volume creation fails randomly - Failed to create iscsi target for volume id:volume-<UUID>. Please ensure your tgtd config file contains 'include /var/lib/cinder/volumes/*'
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Cinder | Invalid | Undecided | Unassigned |
Bug Description
ACTUAL BEHAVIOR: With 300 volumes already in play, provisioning the next set of 50 instances + volumes fails on volume creation. While creating a set of 50 volumes in a row, one volume's creation fails, but the job continues creating the remaining ones. The error message is puzzling, as it reports a missing configuration file. Note in the log below that volumes were being created successfully both before and after the error, so the failure appears timing-related:
2013-06-14 15:04:58 INFO [cinder.…]   (4 similar INFO lines; module paths truncated in this report)
2013-06-14 15:04:58 ERROR [cinder.…]  (3 similar ERROR lines)
Traceback (most recent call last):
  File "/usr/lib/…"
    rval = self.proxy.…
  File "/usr/lib/…"
    return getattr(proxyobj, method)(ctxt, **kwargs)
  File "/usr/lib/…"
    LOG.…
  File "/usr/lib/…"
    self.gen.next()
  File "/usr/lib/…"
    model_update = self.driver.…
  File "/usr/lib/…"
    chap_auth)
  File "/usr/lib/…"
    raise exception.…
NotFound: Resource could not be found.
2013-06-14 15:04:59 INFO [cinder.…]   (8 similar INFO lines; the remaining volumes are created successfully)
The error still surfaces even after specifying the absolute path in /etc/tgt/… and changing the content of the /etc/tgt/… file (both paths are truncated in this report).
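For reference, the directive the error message asks for is an include line in tgtd's configuration. The file is commonly /etc/tgt/targets.conf, though the exact path varies by distribution and is truncated in this report, so treat the location as an assumption:

```
# Make tgtd pick up Cinder's per-volume target definition files:
include /var/lib/cinder/volumes/*
```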
EXPECTED BEHAVIOR: Volume creation should either fail for all volumes or succeed for all, since the relevant configuration file exists and is set up correctly.
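Since the failure looks transient rather than configuration-related, one caller-side workaround is to retry the create call a few times. A minimal sketch; `create_volume` is a placeholder standing in for the real provisioning call (here it is stubbed to fail once and then succeed, purely for illustration):

```shell
# Stub of the provisioning call: fails on the first attempt, succeeds after.
n=0
create_volume() { n=$((n+1)); [ "$n" -ge 2 ]; }

# Retry loop: give a transient iscsi-target failure a few chances to clear.
attempts=3
for try in $(seq 1 "$attempts"); do
  if create_volume; then
    echo "succeeded on attempt $try"
    break
  fi
  sleep 1
done
# prints: succeeded on attempt 2
```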
HOW-TO-REPRODUCE:
1. Set up an OpenStack Grizzly environment using https:/ (URL truncated in this report).
2. Change the metadata size of the physical disk of the cinder volume group to 1020 to accommodate 300+ cinder volumes.
3. Launch a provisioning job that creates 50 instances and 50 volumes (1 GB each) and attaches them in one batch.
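The batch step above can be sketched as a shell loop. Shown here as a dry run that only prints the CLI calls it would issue; the flavor, image, volume ID, and device names are placeholders, not values from the original report:

```shell
# Dry run of the batch provisioning job: print the 50 create/boot/attach
# calls rather than executing them (all capitalized names are placeholders).
for i in $(seq 1 50); do
  echo "cinder create --display-name vol-$i 1"
  echo "nova boot --flavor FLAVOR --image IMAGE inst-$i"
  echo "nova volume-attach inst-$i VOLUME_ID /dev/vdb"
done
```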
ENVIRONMENT: (Hardware, OS, OS Version, Browser, etc)
Cinder is deployed in the simplest mode: one cinder volume group, with no scheduler and no volume-type features in use.
Changed in cinder:
status: New → Invalid
I'm not sure why you would expect that "if one fails, they all should fail"?