upload-to-image process stops when restarting OpenStack services
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Cinder | Invalid | Undecided | Unassigned |
Bug Description
Description of problem:
I started uploading a volume from Cinder to an image in Glance with the command:
# cinder upload-to-image ...
In the middle of the process I restarted the OpenStack services.
Both the volume in Cinder and the image created in Glance are stuck in the "uploading" and "saving" statuses, but the process itself has halted.
Version-Release number of selected component (if applicable):
python-
python-
python-
openstack-
python-
openstack-
How reproducible:
100%
Steps to Reproduce:
1. Run the command:
# cinder upload-to-image <volume-id> --container-format bare --disk-format raw <image name>
2. Run the command:
# openstack-service restart
3. Check whether the image has been saved in the store.
Actual results:
The image-upload process is stuck: the volume status is "uploading" and the image status is "saving".
Expected results:
The process should restart itself from the beginning or continue from the point where it stopped.
I think this is NOT a bug yet; rather, it should be generalized to the issue of task reconciliation. You may take a look at https://wiki.openstack.org/TaskFlow.
See explanations there such as "Currently many of the OpenStack components do not handle this forced stop in a way that leaves the state of the system in a reconcilable state. Typically the actions that a service was actively doing are immediately forced to stop and can not be resumed and are in a way forgotten (a later scanning process may attempt to clean up these orphaned resources)" and "TaskFlow will help in this situation by tracking the actions, tasks, and there associated states so that when the service is restarted (even after the services software is upgraded) the service can easily resume (or rollback) the tasks that were interrupted when the stop/kill command was triggered".
OpenStack projects are in the process of refactoring onto TaskFlow. Once that is done, it will be possible to make such tasks recoverable.
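The resumability idea described above can be sketched in plain Python: checkpoint the job's state after every completed task, so a restarted service resumes from the last checkpoint instead of forgetting in-flight work. This is only an illustrative sketch of the pattern, not TaskFlow or Cinder code; the task names and file path are hypothetical.

```python
import json
import os
import tempfile

# Checkpoint file persisted across restarts (illustrative location only).
STATE_FILE = os.path.join(tempfile.gettempdir(), "upload_job_state.json")

# Hypothetical sub-steps of an upload-to-image job.
TASKS = ["reserve_image", "attach_volume", "copy_chunks", "finalize_image"]

def load_state():
    # If a checkpoint exists, a previous run was interrupted: resume from it.
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {"done": []}

def save_state(state):
    # Checkpoint after every task so a restart loses at most one step.
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

def run_job(execute):
    """Run all TASKS, skipping those already completed before a restart."""
    state = load_state()
    for name in TASKS:
        if name in state["done"]:
            continue  # completed before the interruption; do not redo it
        execute(name)
        state["done"].append(name)
        save_state(state)
    os.remove(STATE_FILE)  # job finished cleanly; clear the checkpoint
    return state["done"]
```

With this pattern, an `openstack-service restart` in the middle of the job would only repeat (or roll back) the single task that was in flight, rather than leaving the volume and image stuck in "uploading"/"saving" forever.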