Snapshots fail to upload larger (~30G+) images, with error '500 Internal Server Error Failed to upload image'

Bug #1407592 reported by Michael Zhdanov
This bug affects 5 people
Affects Status Importance Assigned to Milestone
Glance
Invalid
Undecided
Unassigned
OpenStack Compute (nova)
Invalid
Undecided
Unassigned
OpenStack Identity (keystone)
Won't Fix
Undecided
Unassigned

Bug Description

I use Ubuntu 14.04.1 and OpenStack Juno latest release.

Snapshotting a small virtual machine works. Snapshotting a large VM (more than 30G) fails after it starts saving the snapshot to the Glance store.

To summarize:
- snapshots can be created for small VMs but not for large ones
- there is enough space on the Glance store and on the root partitions of all involved physical machines

The following errors appear in the logs:
/var/log/nova/nova-compute.log
2015-01-05 11:14:56.745 1767 INFO nova.compute.resource_tracker [-] Compute_service record updated for node4:node4
2015-01-05 11:15:32.157 1767 ERROR glanceclient.common.http [req-692621ba-7a6b-471a-9e16-d74ee2eb875d ] Request returned failure status 500.
2015-01-05 11:15:32.158 1767 WARNING urllib3.connectionpool [req-692621ba-7a6b-471a-9e16-d74ee2eb875d ] HttpConnectionPool is full, discarding connection: cloud
2015-01-05 11:15:34.479 1767 ERROR glanceclient.common.http [req-692621ba-7a6b-471a-9e16-d74ee2eb875d ] Request returned failure status 401.
2015-01-05 11:15:34.480 1767 ERROR nova.compute.manager [req-692621ba-7a6b-471a-9e16-d74ee2eb875d None] [instance: f5a232a4-a7b2-4858-b793-38936b0f7a4d] Error while trying to clean up image 71cd0c84-fa54-4d35-9ad6-63e36969de1c
2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: f5a232a4-a7b2-4858-b793-38936b0f7a4d] Traceback (most recent call last):
2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: f5a232a4-a7b2-4858-b793-38936b0f7a4d] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 375, in decorated_function
2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: f5a232a4-a7b2-4858-b793-38936b0f7a4d] self.image_api.delete(context, image_id)
2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: f5a232a4-a7b2-4858-b793-38936b0f7a4d] File "/usr/lib/python2.7/dist-packages/nova/image/api.py", line 137, in delete
2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: f5a232a4-a7b2-4858-b793-38936b0f7a4d] return session.delete(context, image_id)
2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: f5a232a4-a7b2-4858-b793-38936b0f7a4d] File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 418, in delete
2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: f5a232a4-a7b2-4858-b793-38936b0f7a4d] self._client.call(context, 1, 'delete', image_id)
2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: f5a232a4-a7b2-4858-b793-38936b0f7a4d] File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 232, in call
2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: f5a232a4-a7b2-4858-b793-38936b0f7a4d] return getattr(client.images, method)(*args, **kwargs)
2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: f5a232a4-a7b2-4858-b793-38936b0f7a4d] File "/usr/lib/python2.7/dist-packages/glanceclient/v1/images.py", line 255, in delete
2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: f5a232a4-a7b2-4858-b793-38936b0f7a4d] resp, body = self.client.delete(url)
2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: f5a232a4-a7b2-4858-b793-38936b0f7a4d] File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 265, in delete
2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: f5a232a4-a7b2-4858-b793-38936b0f7a4d] return self._request('DELETE', url, **kwargs)
2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: f5a232a4-a7b2-4858-b793-38936b0f7a4d] File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 221, in _request
2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: f5a232a4-a7b2-4858-b793-38936b0f7a4d] raise exc.from_response(resp, resp.content)
2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: f5a232a4-a7b2-4858-b793-38936b0f7a4d] HTTPUnauthorized: <html>
2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: f5a232a4-a7b2-4858-b793-38936b0f7a4d] <head>
2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: f5a232a4-a7b2-4858-b793-38936b0f7a4d] <title>401 Unauthorized</title>
2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: f5a232a4-a7b2-4858-b793-38936b0f7a4d] </head>
2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: f5a232a4-a7b2-4858-b793-38936b0f7a4d] <body>
2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: f5a232a4-a7b2-4858-b793-38936b0f7a4d] <h1>401 Unauthorized</h1>
2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: f5a232a4-a7b2-4858-b793-38936b0f7a4d] This server could not verify that you are authorized to access the document you requested. Either you supplied the wrong credentials (e.g., bad password), or your browser does not understand how to supply the credentials required.<br /><br />
2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: f5a232a4-a7b2-4858-b793-38936b0f7a4d]
2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: f5a232a4-a7b2-4858-b793-38936b0f7a4d] </body>
2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: f5a232a4-a7b2-4858-b793-38936b0f7a4d] </html> (HTTP 401)
2015-01-05 11:15:34.480 1767 TRACE nova.compute.manager [instance: f5a232a4-a7b2-4858-b793-38936b0f7a4d]
2015-01-05 11:15:34.876 1767 INFO nova.scheduler.client.report [req-692621ba-7a6b-471a-9e16-d74ee2eb875d None] Compute_service record updated for ('node4', 'node4')
2015-01-05 11:15:34.879 1767 ERROR oslo.messaging.rpc.dispatcher [req-692621ba-7a6b-471a-9e16-d74ee2eb875d ] Exception during message handling: <html>
 <head>
  <title>500 Internal Server Error</title>
 </head>
 <body>
  <h1>500 Internal Server Error</h1>
  Failed to upload image 71cd0c84-fa54-4d35-9ad6-63e36969de1c<br /><br />

 </body>
</html> (HTTP 500)
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher incoming.message))
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher return self._do_dispatch(endpoint, method, ctxt, args)
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher result = getattr(endpoint, method)(ctxt, **new_args)
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 88, in wrapped
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher payload)
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 71, in wrapped
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher return f(self, context, *args, **kw)
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 303, in decorated_function
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher pass
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 289, in decorated_function
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher return function(self, context, *args, **kwargs)
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 331, in decorated_function
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher kwargs['instance'], e, sys.exc_info())
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 319, in decorated_function
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher return function(self, context, *args, **kwargs)
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 379, in decorated_function
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher instance=instance)
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb)
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 369, in decorated_function
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher *args, **kwargs)
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3027, in snapshot_instance
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher task_states.IMAGE_SNAPSHOT)
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3058, in _snapshot_instance
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher update_task_state)
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1752, in snapshot
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher image_file)
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/image/api.py", line 126, in update
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher purge_props=purge_props)
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 405, in update
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher _reraise_translated_image_exception(image_id)
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 403, in update
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher image_id, **image_meta)
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 232, in call
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher return getattr(client.images, method)(*args, **kwargs)
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/glanceclient/v1/images.py", line 329, in update
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher resp, body = self.client.put(url, headers=hdrs, data=image_data)
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 259, in put
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher return self._request('PUT', url, **kwargs)
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 221, in _request
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher raise exc.from_response(resp, resp.content)
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher HTTPInternalServerError: <html>
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher <head>
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher <title>500 Internal Server Error</title>
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher </head>
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher <body>
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher <h1>500 Internal Server Error</h1>
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher Failed to upload image 71cd0c84-fa54-4d35-9ad6-63e36969de1c<br /><br />
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher </body>
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher </html> (HTTP 500)
2015-01-05 11:15:34.879 1767 TRACE oslo.messaging.rpc.dispatcher
2015-01-05 11:15:55.562 1767 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources

/var/log/glance/api.log
2015-01-05 11:15:20.569 19103 INFO glance.wsgi.server [-] 192.168.7.215 - - [05/Jan/2015 11:15:20] "HEAD /v1/images/2403f983-c6e2-4de1-9e56-dec0cbc1c203 HTTP/1.1" 200 837 0.014521
2015-01-05 11:15:32.054 19103 INFO glance.registry.client.v1.client [-] Registry client request PUT /images/71cd0c84-fa54-4d35-9ad6-63e36969de1c raised NotAuthenticated
2015-01-05 11:15:32.054 19103 ERROR glance.api.v1.upload_utils [-] Failed to upload image 71cd0c84-fa54-4d35-9ad6-63e36969de1c
2015-01-05 11:15:32.054 19103 TRACE glance.api.v1.upload_utils Traceback (most recent call last):
2015-01-05 11:15:32.054 19103 TRACE glance.api.v1.upload_utils File "/usr/lib/python2.7/dist-packages/glance/api/v1/upload_utils.py", line 159, in upload_data_to_store
2015-01-05 11:15:32.054 19103 TRACE glance.api.v1.upload_utils from_state='saving')
2015-01-05 11:15:32.054 19103 TRACE glance.api.v1.upload_utils File "/usr/lib/python2.7/dist-packages/glance/registry/client/v1/api.py", line 169, in update_image_metadata
2015-01-05 11:15:32.054 19103 TRACE glance.api.v1.upload_utils from_state=from_state)
2015-01-05 11:15:32.054 19103 TRACE glance.api.v1.upload_utils File "/usr/lib/python2.7/dist-packages/glance/registry/client/v1/client.py", line 198, in update_image
2015-01-05 11:15:32.054 19103 TRACE glance.api.v1.upload_utils headers=headers)
2015-01-05 11:15:32.054 19103 TRACE glance.api.v1.upload_utils File "/usr/lib/python2.7/dist-packages/glance/registry/client/v1/client.py", line 130, in do_request
2015-01-05 11:15:32.054 19103 TRACE glance.api.v1.upload_utils 'exc_name': exc_name})
2015-01-05 11:15:32.054 19103 TRACE glance.api.v1.upload_utils File "/usr/lib/python2.7/dist-packages/glance/openstack/common/excutils.py", line 82, in __exit__
2015-01-05 11:15:32.054 19103 TRACE glance.api.v1.upload_utils six.reraise(self.type_, self.value, self.tb)
2015-01-05 11:15:32.054 19103 TRACE glance.api.v1.upload_utils File "/usr/lib/python2.7/dist-packages/glance/registry/client/v1/client.py", line 115, in do_request
2015-01-05 11:15:32.054 19103 TRACE glance.api.v1.upload_utils **kwargs)
2015-01-05 11:15:32.054 19103 TRACE glance.api.v1.upload_utils File "/usr/lib/python2.7/dist-packages/glance/common/client.py", line 71, in wrapped
2015-01-05 11:15:32.054 19103 TRACE glance.api.v1.upload_utils return func(self, *args, **kwargs)
2015-01-05 11:15:32.054 19103 TRACE glance.api.v1.upload_utils File "/usr/lib/python2.7/dist-packages/glance/common/client.py", line 373, in do_request
2015-01-05 11:15:32.054 19103 TRACE glance.api.v1.upload_utils headers=copy.deepcopy(headers))
2015-01-05 11:15:32.054 19103 TRACE glance.api.v1.upload_utils File "/usr/lib/python2.7/dist-packages/glance/common/client.py", line 85, in wrapped
2015-01-05 11:15:32.054 19103 TRACE glance.api.v1.upload_utils return func(self, method, url, body, headers)
2015-01-05 11:15:32.054 19103 TRACE glance.api.v1.upload_utils File "/usr/lib/python2.7/dist-packages/glance/common/client.py", line 516, in _do_request
2015-01-05 11:15:32.054 19103 TRACE glance.api.v1.upload_utils raise exception.NotAuthenticated(res.read())
2015-01-05 11:15:32.054 19103 TRACE glance.api.v1.upload_utils NotAuthenticated: Authentication required
2015-01-05 11:15:32.054 19103 TRACE glance.api.v1.upload_utils
2015-01-05 11:15:32.152 19103 INFO glance.registry.client.v1.client [-] Registry client request PUT /images/71cd0c84-fa54-4d35-9ad6-63e36969de1c raised NotAuthenticated
2015-01-05 11:15:32.152 19103 ERROR glance.api.v1.upload_utils [-] Unable to kill image 71cd0c84-fa54-4d35-9ad6-63e36969de1c:
2015-01-05 11:15:32.152 19103 TRACE glance.api.v1.upload_utils Traceback (most recent call last):
2015-01-05 11:15:32.152 19103 TRACE glance.api.v1.upload_utils File "/usr/lib/python2.7/dist-packages/glance/api/v1/upload_utils.py", line 77, in safe_kill
2015-01-05 11:15:32.152 19103 TRACE glance.api.v1.upload_utils _kill(req, image_id, from_state)
2015-01-05 11:15:32.152 19103 TRACE glance.api.v1.upload_utils File "/usr/lib/python2.7/dist-packages/glance/api/v1/upload_utils.py", line 62, in _kill
2015-01-05 11:15:32.152 19103 TRACE glance.api.v1.upload_utils from_state=from_state)
2015-01-05 11:15:32.152 19103 TRACE glance.api.v1.upload_utils File "/usr/lib/python2.7/dist-packages/glance/registry/client/v1/api.py", line 169, in update_image_metadata
2015-01-05 11:15:32.152 19103 TRACE glance.api.v1.upload_utils from_state=from_state)
2015-01-05 11:15:32.152 19103 TRACE glance.api.v1.upload_utils File "/usr/lib/python2.7/dist-packages/glance/registry/client/v1/client.py", line 198, in update_image
2015-01-05 11:15:32.152 19103 TRACE glance.api.v1.upload_utils headers=headers)
2015-01-05 11:15:32.152 19103 TRACE glance.api.v1.upload_utils File "/usr/lib/python2.7/dist-packages/glance/registry/client/v1/client.py", line 130, in do_request
2015-01-05 11:15:32.152 19103 TRACE glance.api.v1.upload_utils 'exc_name': exc_name})
2015-01-05 11:15:32.152 19103 TRACE glance.api.v1.upload_utils File "/usr/lib/python2.7/dist-packages/glance/openstack/common/excutils.py", line 82, in __exit__
2015-01-05 11:15:32.152 19103 TRACE glance.api.v1.upload_utils six.reraise(self.type_, self.value, self.tb)
2015-01-05 11:15:32.152 19103 TRACE glance.api.v1.upload_utils File "/usr/lib/python2.7/dist-packages/glance/registry/client/v1/client.py", line 115, in do_request
2015-01-05 11:15:32.152 19103 TRACE glance.api.v1.upload_utils **kwargs)
2015-01-05 11:15:32.152 19103 TRACE glance.api.v1.upload_utils File "/usr/lib/python2.7/dist-packages/glance/common/client.py", line 71, in wrapped
2015-01-05 11:15:32.152 19103 TRACE glance.api.v1.upload_utils return func(self, *args, **kwargs)
2015-01-05 11:15:32.152 19103 TRACE glance.api.v1.upload_utils File "/usr/lib/python2.7/dist-packages/glance/common/client.py", line 373, in do_request
2015-01-05 11:15:32.152 19103 TRACE glance.api.v1.upload_utils headers=copy.deepcopy(headers))
2015-01-05 11:15:32.152 19103 TRACE glance.api.v1.upload_utils File "/usr/lib/python2.7/dist-packages/glance/common/client.py", line 85, in wrapped
2015-01-05 11:15:32.152 19103 TRACE glance.api.v1.upload_utils return func(self, method, url, body, headers)
2015-01-05 11:15:32.152 19103 TRACE glance.api.v1.upload_utils File "/usr/lib/python2.7/dist-packages/glance/common/client.py", line 516, in _do_request
2015-01-05 11:15:32.152 19103 TRACE glance.api.v1.upload_utils raise exception.NotAuthenticated(res.read())
2015-01-05 11:15:32.152 19103 TRACE glance.api.v1.upload_utils NotAuthenticated: Authentication required
2015-01-05 11:15:32.152 19103 TRACE glance.api.v1.upload_utils
2015-01-05 11:15:32.154 19103 INFO glance.wsgi.server [-] 192.168.7.219 - - [05/Jan/2015 11:15:32] "PUT /v1/images/71cd0c84-fa54-4d35-9ad6-63e36969de1c HTTP/1.1" 500 407 700.615980
2015-01-05 11:15:34.463 19103 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): cloud
2015-01-05 11:15:34.476 19103 WARNING keystonemiddleware.auth_token [-] Authorization failed for token
2015-01-05 11:15:34.476 19103 WARNING keystonemiddleware.auth_token [-] Authorization failed for token
2015-01-05 11:15:34.477 19103 INFO keystonemiddleware.auth_token [-] Invalid user token - deferring reject downstream
2015-01-05 11:15:34.477 19103 INFO glance.wsgi.server [-] 192.168.7.219 - - [05/Jan/2015 11:15:34] "DELETE /v1/images/71cd0c84-fa54-4d35-9ad6-63e36969de1c HTTP/1.1" 401 485 0.015917

Revision history for this message
Matt Riedemann (mriedem) wrote :

The glance logs are showing auth issues, have you checked into that? Maybe it's timing out and invalidating your token? What are your token expiration times set to?

What version of python-glanceclient are you using? And what's the backing glance store? Just the default file system or some other backend?

Changed in nova:
status: New → Incomplete
Revision history for this message
Matt Riedemann (mriedem) wrote :

Try tracing when the snapshot request starts in the nova-api and glance-api logs and then see how long it takes before it fails, then check if your token lifetime is expiring before that, there is a config setting in keystone.conf for that I think.
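Matt's suggestion can be checked mechanically by comparing log timestamps against the token lifetime. A minimal sketch, assuming the timestamp format shown in the logs above; the two sample lines and the 3600-second lifetime (keystone's stock default) are illustrative, not values confirmed in this bug:

```python
from datetime import datetime

def snapshot_exceeds_token_lifetime(start_line, fail_line, token_lifetime_s=3600):
    """Compare the elapsed time between two log lines against the token
    lifetime (keystone's default expiration is 3600 seconds)."""
    fmt = "%Y-%m-%d %H:%M:%S.%f"
    # The timestamp is the first two whitespace-separated fields of a log line.
    parse = lambda line: datetime.strptime(" ".join(line.split()[:2]), fmt)
    elapsed = (parse(fail_line) - parse(start_line)).total_seconds()
    return elapsed, elapsed > token_lifetime_s

# Hypothetical start line for a ~70-minute snapshot, paired with the real
# failure timestamp from the nova-compute log above:
elapsed, expired = snapshot_exceeds_token_lifetime(
    "2015-01-05 10:05:00.000 1767 INFO nova.compute.manager snapshot start",
    "2015-01-05 11:15:32.157 1767 ERROR glanceclient.common.http status 500",
)
# elapsed is 4232.157 seconds, longer than the default 3600 s lifetime
```

If the elapsed time for a failing snapshot lands past the configured expiration while successful small snapshots land inside it, that points squarely at token expiry.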

Revision history for this message
Matt Riedemann (mriedem) wrote :

Also, this might be an indication of another bug:

"2015-01-05 11:15:32.158 1767 WARNING urllib3.connectionpool [req-692621ba-7a6b-471a-9e16-d74ee2eb875d ] HttpConnectionPool is full, discarding connection: cloud"

See python-glanceclient bug 1341777.

Revision history for this message
Michael Zhdanov (zhdan) wrote :

"The glance logs are showing auth issues, have you checked into that? Maybe it's timing out and invalidating your token? What are your token expiration times set to?" - I have checked auth and it is correct, since other images can be uploaded and saved. The token expiration value was doubled compared to the default.

"What version of python-glanceclient are you using?" -
Package: python-glanceclient
Status: install ok installed
Priority: extra
Section: python
Installed-Size: 332
Maintainer: Chuck Short <email address hidden>
Architecture: all
Version: 1:0.14.0-0ubuntu1~cloud0
Replaces: glance-client
Depends: python-babel (>= 1.3), python-keystoneclient (>= 1:0.9.0), python-openssl (>= 0.11), python-prettytable (>= 0.7), python-requests, python-setuptools, python-warlock (>= 1.0.1), python-six (>= 1.7.0), python (>= 2.7), python (<< 2.8), python:any (>= 2.7.1-0ubuntu2), python-pbr

Package: nova-api
Status: install ok installed
Priority: extra
Section: net
Installed-Size: 421
Maintainer: Ubuntu Developers <email address hidden>
Architecture: all
Source: nova
Version: 1:2014.2.1-0ubuntu1~cloud0
Depends: iptables, nova-common (= 1:2014.2.1-0ubuntu1~cloud0), sysv-rc (>= 2.88dsf-24) | file-rc (>= 0.8.16), python:any

Package: glance-api
Status: install ok installed
Priority: extra
Section: python
Installed-Size: 146
Maintainer: Ubuntu OpenStack <email address hidden>
Architecture: all
Source: glance
Version: 1:2014.2.1-0ubuntu1~cloud0
Replaces: glance (<< 2012.1~e1~20110919.1021-0ubuntu2)
Depends: glance-common (= 1:2014.2.1-0ubuntu1~cloud0), sysv-rc (>= 2.88dsf-24) | file-rc (>= 0.8.16), python:any
Recommends: python-glanceclient

"And what's the backing glance store?" - default file system

Changed in nova:
status: Incomplete → New
Revision history for this message
Michael Zhdanov (zhdan) wrote :

Also, after this error the snapshot appears in my dashboard with status Saving, but it does not exist on the file system and can be removed from the dashboard by pressing Delete Image.

Revision history for this message
hzrandd (82433422-a) wrote :

The image is too big to finish uploading before the token expires. Also make sure the host disk has enough space.

Revision history for this message
Michael Zhdanov (zhdan) wrote :

All expiration-related values in keystone were doubled - no result. As I mentioned, all disks on all machines have enough free space.

Revision history for this message
Joe Gordon (jogo) wrote :

Although this sounds like a real bug, it's unclear whether this is a nova bug; it looks more like a glance bug.

Changed in nova:
status: New → Incomplete
Revision history for this message
Lance Bragstad (lbragstad) wrote :

Hi Michael,

When you tested this, what were the exact values you set for the token expiration? The default token expiration is 1 hour, which might need to be adjusted depending on how long the operation takes.

https://github.com/openstack/keystone/blob/e54c32399b2e80ed729e906154c50cc11cd923c2/etc/keystone.conf.sample#L1474
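For reference, the option Lance points at lives in the `[token]` section of keystone.conf. A sketch of the relevant fragment - 3600 is the stock default; the larger value is purely illustrative:

```ini
[token]
# Default token validity in seconds (1 hour). A snapshot upload that takes
# longer than this will start hitting 401s mid-operation, as in the logs above.
# expiration = 3600
# Illustrative only: raise it past the longest expected snapshot duration.
expiration = 14400
```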

Revision history for this message
Dolph Mathews (dolph) wrote :

Marking as incomplete in Glance because I suspect it's an issue with Keystone tokens expiring, and marking as incomplete in Keystone because it's not clear how long the snapshot takes or what exactly the keystone token lifespan is (see comment #9).

Changed in keystone:
status: New → Incomplete
Changed in glance:
status: New → Incomplete
Revision history for this message
Michael Zhdanov (zhdan) wrote :

Hi All

All token expiration values in keystone.conf were increased to 2-3 times the defaults (the OpenStack services were restarted afterwards). For example: expiration=5000, expiration_time=3600, and so on.

I snapshot a VM (disk size ~60Gb with 35Gb used, snapshot compression enabled). Start time 11:03... then, after ~45 minutes:

/var/log/glance/api.log
2015-01-31 11:48:12.388 1504 INFO glance.wsgi.server [-] 192.168.7.215 - - [31/Jan/2015 11:48:12] "HEAD /v1/images/983b6452-d9aa-4224-a8a4-6f942e31ef76 HTTP/1.1" 200 1619 0.016105
2015-01-31 11:48:30.398 1504 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): jcloud
2015-01-31 11:48:30.412 1504 WARNING keystonemiddleware.auth_token [-] Authorization failed for token
2015-01-31 11:48:30.412 1504 WARNING keystonemiddleware.auth_token [-] Authorization failed for token
2015-01-31 11:48:30.413 1504 INFO keystonemiddleware.auth_token [-] Invalid user token - deferring reject downstream
2015-01-31 11:48:32.163 1504 INFO glance.wsgi.server [-] 192.168.7.215 - - [31/Jan/2015 11:48:32] "GET /v1/images/detail?sort_key=created_at&sort_dir=desc&limit=1000 HTTP/1.1" 200 20964 0.077386

and finally:

2015-01-31 11:55:53.200 1504 INFO glance.wsgi.server [-] 192.168.7.218 - - [31/Jan/2015 11:55:53] "PUT /v1/images/983b6452-d9aa-4224-a8a4-6f942e31ef76 HTTP/1.1" 401 485 442.797256
2015-01-31 11:55:55.363 1503 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): jcloud
2015-01-31 11:55:55.379 1503 WARNING keystonemiddleware.auth_token [-] Authorization failed for token
2015-01-31 11:55:55.379 1503 WARNING keystonemiddleware.auth_token [-] Authorization failed for token
2015-01-31 11:55:55.380 1503 INFO keystonemiddleware.auth_token [-] Invalid user token - deferring reject downstream
2015-01-31 11:55:55.381 1503 INFO glance.wsgi.server [-] 192.168.7.218 - - [31/Jan/2015 11:55:55] "DELETE /v1/images/983b6452-d9aa-4224-a8a4-6f942e31ef76 HTTP/1.1" 401 485 0.019435

Plus in /var/log/nova/nova-compute.log at this time (after ~ 45 minutes):
2015-01-31 11:55:03.359 1789 INFO nova.compute.resource_tracker [-] Compute_service record updated for jcnode3:jcnode3
2015-01-31 11:55:53.175 1789 ERROR glanceclient.common.http [req-923ed9a0-dda2-4c89-9ad2-b851f84a33d6 ] Request returned failure status 401.
2015-01-31 11:55:53.176 1789 WARNING urllib3.connectionpool [req-923ed9a0-dda2-4c89-9ad2-b851f84a33d6 ] HttpConnectionPool is full, discarding connection: jcloud
2015-01-31 11:55:55.387 1789 ERROR glanceclient.common.http [req-923ed9a0-dda2-4c89-9ad2-b851f84a33d6 ] Request returned failure status 401.
2015-01-31 11:55:55.387 1789 ERROR nova.compute.manager [req-923ed9a0-dda2-4c89-9ad2-b851f84a33d6 None] [instance: f5e7446e-5272-4be0-bfe0-2e2037f65b43] Error while trying to clean up image 983b6452-d9aa-4224-a8a4-6f942e31ef76
2015-01-31 11:55:55.387 1789 TRACE nova.compute.manager [instance: f5e7446e-5272-4be0-bfe0-2e2037f65b43] Traceback (most recent call last):
2015-01-31 11:55:55.387 1789 TRACE nova.compute.manager [instance: ...

Changed in glance:
status: Incomplete → New
Changed in keystone:
status: Incomplete → New
Revision history for this message
Timur Nurlygayanov (tnurlygayanov) wrote :

I can see the same error during image upload into Glance. With the Swift backend I get this error for files larger than 15Gb; with the plain file system backend, for files larger than 30Gb.
My token expiration time is 1000 hours. In the console I see the following:

[root@controller]# glance image-create --name 34Gb-image --disk-format qcow2 --container-format bare --file ./MyImage.qcow2 --progress
[=============================>] 100%
Request returned failure status.
500 Internal Server Error
Failed to upload image 66418414-0b29-4ffe-b8d0-aa42c71058ce
(HTTP 500)
[root@controller]#
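Timur's size thresholds are consistent with an upload-duration limit rather than a disk-space one: if a token were only valid for the default hour, the largest image that can succeed would be bounded by sustained throughput. A back-of-the-envelope sketch - the throughput figures are assumptions for illustration, not measurements from this deployment:

```python
def max_image_gb(token_lifetime_s, throughput_mb_s):
    """Largest image (in GB) that can finish uploading before the token
    expires, given a sustained upload throughput in MB/s."""
    return token_lifetime_s * throughput_mb_s / 1024.0

# With a 1-hour token and an assumed ~10 MB/s effective throughput,
# anything much over ~35 GB cannot finish in time; a slower backend
# (e.g. ~4 MB/s) would cap out near 15 GB.
fast_limit = max_image_gb(3600, 10)  # ~35.16 GB
slow_limit = max_image_gb(3600, 4)   # ~14.06 GB
```

This would explain why the failure threshold differs between the Swift and file-system backends even though free space is ample.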

In Glance logs I can see the following:

________________
<150>Feb 10 00:37:18 node-1 glance-glance.registry.api.v1.images INFO: Updating metadata for image 66418414-0b29-4ffe-b8d0-aa42c71058ce
<150>Feb 10 00:37:18 node-1 glance-glance.wsgi.server INFO: 192.168.0.4 - - [10/Feb/2015 00:37:18] "PUT /images/66418414-0b29-4ffe-b8d0-aa42c71058ce HTTP/1.1" 200 689 0.019713
<150>Feb 10 01:12:50 node-1 glance-urllib3.connectionpool INFO: Starting new HTTP connection (1): 192.168.0.4
<150>Feb 10 01:12:51 node-1 glance-urllib3.connectionpool INFO: Starting new HTTP connection (1): 192.168.0.4
<150>Feb 10 01:12:51 node-1 glance-urllib3.connectionpool INFO: Starting new HTTP connection (1): 192.168.0.4
<150>Feb 10 01:12:51 node-1 glance-glance.wsgi.server INFO: 192.168.0.4 - - [10/Feb/2015 01:12:51] "PUT /images/66418414-0b29-4ffe-b8d0-aa42c71058ce HTTP/1.1" 401 194 0.554667
<150>Feb 10 01:12:51 node-1 glance-glance.wsgi.server INFO: 192.168.0.4 - - [10/Feb/2015 01:12:51] "PUT /images/66418414-0b29-4ffe-b8d0-aa42c71058ce HTTP/1.1" 401 194 0.000398
<150>Feb 10 01:12:51 node-1 glance-glance.registry.client.v1.client INFO: Registry client request PUT /images/66418414-0b29-4ffe-b8d0-aa42c71058ce raised NotAuthenticated
<147>Feb 10 01:12:51 node-1 glance-glance.api.v1.upload_utils ERROR: Failed to upload image 66418414-0b29-4ffe-b8d0-aa42c71058ce
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/glance/api/v1/upload_utils.py", line 145, in upload_data_to_store
    update_data)
  File "/usr/lib/python2.6/site-packages/glance/registry/client/v1/api.py", line 169, in update_image_metadata
    from_state=from_state)
  File "/usr/lib/python2.6/site-packages/glance/registry/client/v1/client.py", line 190, in update_image
    headers=headers)
  File "/usr/lib/python2.6/site-packages/glance/registry/client/v1/client.py", line 107, in do_request
    **kwargs)
  File "/usr/lib/python2.6/site-packages/glance/common/client.py", line 69, in wrapped
    return func(self, *args, **kwargs)
  File "/usr/lib/python2.6/site-packages/glance/common/client.py", line 386, in do_request
    headers=copy.deepcopy(headers))
  File "/usr/lib/python2.6/site-packages/glance/common/client.py", line 83, in wrapped
    return func(self, method, url, body, headers)
  File "/usr/lib/python2.6/site-packages/glance/common/client.py", line 527, in _do_request
    raise exception.NotAuthenticated(res.read())
NotAuthenticated: Authentication required
<150>Feb 10 01:12:51 node-1 glance-glance.wsgi.server INFO: 192.168.0.4 - - [10/Feb/2015 01:12:51] "PUT /images/66...
Revision history for this message
Timur Nurlygayanov (tnurlygayanov) wrote :

In keystone logs I can see the following:

<188>Feb 10 05:39:15 node-1 keystone-keystone.middleware.core WARNING: RBAC: Invalid token
<188>Feb 10 05:39:15 node-1 keystone-keystone.common.wsgi WARNING: The request you have made requires authentication.
<190>Feb 10 05:39:15 node-1 keystone-eventlet.wsgi.server INFO: 192.168.0.4 - - [10/Feb/2015 05:39:15] "GET /v2.0/tokens/7d3a842eef914967ba92a1e03210d08b HTTP/1.1" 401 312 0.002939
<190>Feb 10 05:39:15 node-1 keystone-eventlet.wsgi.server INFO: 192.168.0.4 - - [10/Feb/2015 05:39:15] "POST /v2.0/tokens HTTP/1.1" 200 3927 0.049472
<190>Feb 10 05:39:15 node-1 keystone-eventlet.wsgi.server INFO: 192.168.0.4 - - [10/Feb/2015 05:39:15] "GET /v2.0/tokens/7d3a842eef914967ba92a1e03210d08b HTTP/1.1" 200 3927 0.005070
<188>Feb 10 05:39:15 node-1 keystone-keystone.middleware.core WARNING: RBAC: Invalid token
<188>Feb 10 05:39:15 node-1 keystone-keystone.common.wsgi WARNING: The request you have made requires authentication.
<190>Feb 10 05:39:15 node-1 keystone-eventlet.wsgi.server INFO: 192.168.0.4 - - [10/Feb/2015 05:39:15] "GET /v2.0/tokens/7d3a842eef914967ba92a1e03210d08b HTTP/1.1" 401 312 0.002919
<190>Feb 10 05:39:15 node-1 keystone-eventlet.wsgi.server INFO: 192.168.0.4 - - [10/Feb/2015 05:39:15] "POST /v2.0/tokens HTTP/1.1" 200 3927 0.050005
<190>Feb 10 05:39:15 node-1 keystone-eventlet.wsgi.server INFO: 192.168.0.4 - - [10/Feb/2015 05:39:15] "GET /v2.0/tokens/7d3a842eef914967ba92a1e03210d08b HTTP/1.1" 200 3927 0.004965
<188>Feb 10 05:43:16 node-1 keystone-keystone.common.wsgi WARNING: Could not find token, 7d3a842eef914967ba92a1e03210d08b.
<190>Feb 10 05:43:16 node-1 keystone-eventlet.wsgi.server INFO: 192.168.0.4 - - [10/Feb/2015 05:43:16] "GET /v2.0/tokens/7d3a842eef914967ba92a1e03210d08b HTTP/1.1" 404 252 0.005836
<190>Feb 10 05:43:16 node-1 keystone-eventlet.wsgi.server INFO: 192.168.0.4 - - [10/Feb/2015 05:43:16] "POST /v2.0/tokens HTTP/1.1" 200 3927 0.057720
<190>Feb 10 05:43:16 node-1 keystone-eventlet.wsgi.server INFO: 192.168.0.4 - - [10/Feb/2015 05:43:16] "GET /v2.0/tokens/be7eca8541df458cb2f014e2ae744353 HTTP/1.1" 200 3927 0.005301
<190>Feb 10 05:44:26 node-1 keystone-keystone.common.environment INFO: Environment configured as: eventlet
<190>Feb 10 05:44:26 node-1 keystone-keystone.common.kvs.core INFO: Using default dogpile sha1_mangle_key as KVS region token-driver key_mangler
<188>Feb 10 05:44:26 node-1 keystone-keystone.token.provider WARNING: keystone.conf [signing] token_format is deprecated in favor of keystone.conf [token] provider
<188>Feb 10 05:44:26 node-1 keysto...


Revision history for this message
Timur Nurlygayanov (tnurlygayanov) wrote :

Update: I have fixed the issue - you need to edit the default values of several parameters in the keystone configuration file:

/etc/keystone/keystone.conf:
[DEFAULT]
...
debug = True
...
[cache]
expiration_time = 6000000 (default value was 600 seconds)
...
[signing]
#token_format=UUID (because it is deprecated)
...
[token]
expiration=3600000 (default value was 3600 seconds)
provider=keystone.token.providers.uuid.Provider (default value was 'None')
revocation_cache_time=360000000 (default value was 3600 seconds)
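For scale, the workaround values above are far beyond ordinary tuning; a quick conversion to human units (illustrative arithmetic only, values copied from the snippet above):

```python
# Convert the increased keystone TTLs above into human-readable units.
# Values are copied from the keystone.conf snippet in this comment.
SETTINGS = {
    "cache/expiration_time": 6000000,
    "token/expiration": 3600000,
    "token/revocation_cache_time": 360000000,
}
for name, seconds in SETTINGS.items():
    print("%s: %d s = %.1f days" % (name, seconds, seconds / 86400.0))
```

So the token lifetime goes from one hour to roughly 42 days, which explains why the workaround masks the expiry-during-upload problem rather than fixing it.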

________________

and then I have restarted keystone:
/etc/init.d/openstack-keystone restart

________________

And now I can successfully upload a 40 GB image.

I hope this helps :)
Perhaps we could set larger default values for these parameters?
In any case, it is clear now that this is not a bug in the code of keystone or glance; it is a configuration issue.

Could someone else please try this and confirm that the root of the issue is in the initial keystone configuration?

Revision history for this message
Michael Zhdanov (zhdan) wrote :

Hi

Timur, thank you for your update - it partly solved my problem (I was able to upload a ~44 GB image, but the next one failed again).

The errors:
/var/log/glance/api.log
2015-03-02 09:25:22.876 2138 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): jcloud
2015-03-02 09:25:22.913 2138 WARNING keystonemiddleware.auth_token [-] Authorization failed for token
2015-03-02 09:25:22.913 2138 WARNING keystonemiddleware.auth_token [-] Authorization failed for token
2015-03-02 09:25:22.914 2138 INFO keystonemiddleware.auth_token [-] Invalid user token - deferring reject downstream

/var/log/nova/nova-compute.log
2015-03-02 09:25:07.202 1715 INFO nova.compute.resource_tracker [-] Compute_service record updated for jcnode1:jcnode1
2015-03-02 09:25:20.846 1715 ERROR glanceclient.common.http [req-57c46af1-50c8-40eb-ae70-8de3664144d1 ] Request returned failure status 401.
2015-03-02 09:25:20.847 1715 WARNING urllib3.connectionpool [req-57c46af1-50c8-40eb-ae70-8de3664144d1 ] HttpConnectionPool is full, discarding connection: jcloud
2015-03-02 09:25:22.912 1715 ERROR glanceclient.common.http [req-57c46af1-50c8-40eb-ae70-8de3664144d1 ] Request returned failure status 401.
2015-03-02 09:25:22.913 1715 ERROR nova.compute.manager [req-57c46af1-50c8-40eb-ae70-8de3664144d1 None] [instance: fa993ee3-3d0c-4832-95d9-f23ad858208f] Error while trying to clean up image 3ce08382-fe16-4261-8525-780164e4f8fc
2015-03-02 09:25:22.913 1715 TRACE nova.compute.manager [instance: fa993ee3-3d0c-4832-95d9-f23ad858208f] Traceback (most recent call last):
2015-03-02 09:25:22.913 1715 TRACE nova.compute.manager [instance: fa993ee3-3d0c-4832-95d9-f23ad858208f] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 375, in decorated_function
2015-03-02 09:25:22.913 1715 TRACE nova.compute.manager [instance: fa993ee3-3d0c-4832-95d9-f23ad858208f] self.image_api.delete(context, image_id)
2015-03-02 09:25:22.913 1715 TRACE nova.compute.manager [instance: fa993ee3-3d0c-4832-95d9-f23ad858208f] File "/usr/lib/python2.7/dist-packages/nova/image/api.py", line 137, in delete
2015-03-02 09:25:22.913 1715 TRACE nova.compute.manager [instance: fa993ee3-3d0c-4832-95d9-f23ad858208f] return session.delete(context, image_id)
2015-03-02 09:25:22.913 1715 TRACE nova.compute.manager [instance: fa993ee3-3d0c-4832-95d9-f23ad858208f] File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 418, in delete
2015-03-02 09:25:22.913 1715 TRACE nova.compute.manager [instance: fa993ee3-3d0c-4832-95d9-f23ad858208f] self._client.call(context, 1, 'delete', image_id)
2015-03-02 09:25:22.913 1715 TRACE nova.compute.manager [instance: fa993ee3-3d0c-4832-95d9-f23ad858208f] File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 232, in call
2015-03-02 09:25:22.913 1715 TRACE nova.compute.manager [instance: fa993ee3-3d0c-4832-95d9-f23ad858208f] return getattr(client.images, method)(*args, **kwargs)
2015-03-02 09:25:22.913 1715 TRACE nova.compute.manager [instance: fa993ee3-3d0c-4832-95d9-f23ad858208f] File "/usr/lib/python2.7/dist-packages/glanceclient/v1/images.py", line 255, in delete
2015-03-02 09:25:22.913 1715 TRACE nova.compute.mana...

Revision history for this message
Michael Zhdanov (zhdan) wrote :

Additionally: I use Ubuntu 14.04.2 LTS and the OpenStack Juno release (latest updates installed).
Regarding the previous comment: one image was created successfully from node3, but the attempt from node1 failed again. The configurations on both nodes are correct, and small images can also be created from node1.

Revision history for this message
Morgan Fainberg (mdrnstm) wrote :

From a Keystone perspective, this is strictly the token TTL (at best). It can be resolved by increasing the TTL. We can evaluate changing the defaults, but I am unsure how to calculate what the default *should* be for all use cases. We will continue to iterate on this in general; however, this is not a keystone bug per se, so I am marking it as Won't Fix within Keystone.

We should provide better documentation on the token TTL and re-evaluate the defaults.

Changed in keystone:
status: New → Won't Fix
Ian Cordasco (icordasc)
tags: added: propose-close
Revision history for this message
Sean Dague (sdague) wrote :

This isn't really a nova bug. Keystone has no way to create operation tokens for long-running work. It's a design limitation and not something nova can really work around.

Changed in nova:
status: Incomplete → Invalid
Revision history for this message
randhir singh (randhirsingh21) wrote :

Is there any update on this issue? I am facing the same problem, and my snapshot image size is 38.4 GB.
Can anyone help me, please?

I want to make a snapshot of my running instance, which is 38.4 GB in size.

Revision history for this message
randhir singh (randhirsingh21) wrote :

Thanks for this post... the issue was resolved by updating the keystone configuration.

Revision history for this message
Michael Zhdanov (zhdan) wrote :

Hi All

Thanks to all involved: the keystone configuration was the problem (though in my case almost all the default values had to be increased more than threefold).

Changed in glance:
status: New → Invalid
Revision history for this message
Christian Sarrasin (sxc731) wrote :

I know this is an old bug and it's marked invalid but I just experienced it several times in the past few days (this is on Kilo).

At first I thought that upping the Keystone expiry time to 4 hours (i.e. expiration = 14400) had taken care of the issue, but then I tried to snapshot an 80 GB (Windows, of course!) disk and it happened again some 26 minutes into the process (i.e. after glance-registry announced "Successfully created image").

26 minutes is clearly less than 4 hours (and less than 1 hour, for that matter). So it appears that no matter what you set the Keystone TTL to, if you're "unlucky" and expiry happens in the middle of a lengthy Glance operation, it will fail. Couldn't Glance mitigate this by requesting a fresh token before undertaking a lengthy operation (say, if the size of the image is greater than some threshold X)? Just a thought...
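The mitigation suggested here can be sketched as a timing check: before starting an upload, compare the upload's estimated duration against the token's remaining lifetime. The helper name, the throughput figure, and the whole shape of the code below are hypothetical for illustration; this is not Glance code.

```python
from datetime import datetime, timedelta

def needs_fresh_token(issued_at, ttl_seconds, image_bytes, throughput_bps,
                      now=None):
    """Return True if the current token could expire before an upload
    of the given size finishes. Hypothetical helper, not Glance code."""
    now = now or datetime.utcnow()
    expires_at = issued_at + timedelta(seconds=ttl_seconds)
    upload_eta = now + timedelta(seconds=image_bytes / float(throughput_bps))
    return upload_eta >= expires_at

# An 80 GiB image at ~50 MiB/s takes ~27 minutes; with a 1-hour token
# issued 40 minutes ago, the upload would outlive the token.
issued = datetime(2015, 6, 1, 12, 0, 0)
print(needs_fresh_token(issued, 3600, 80 * 1024**3, 50 * 1024**2,
                        now=datetime(2015, 6, 1, 12, 40, 0)))  # prints True
```

Real-world throughput varies, so any actual implementation of this check would need a conservative estimate; the sketch only illustrates the race between upload duration and token TTL.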

Revision history for this message
Favyen Bastani (fbastani) wrote :

I am still getting this bug on Liberty (package version is 2:11.0.1-0ubuntu1~cloud0).

Keystone configuration:

[cache]
expiration_time=60000
[token]
expiration=60000
[revoke]
cache_time=60000

glance-registry log:

2017-01-05 01:51:56.262 22321 WARNING keystonemiddleware.auth_token [-] Authorization failed for token
2017-01-05 01:51:56.262 22321 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "Could not find token: 4f5755cc[snip]ee", "code": 404, "title": "Not Found"}}
2017-01-05 01:51:56.263 22321 WARNING keystonemiddleware.auth_token [-] Authorization failed for token
2017-01-05 01:51:56.264 22321 INFO eventlet.wsgi.server [-] 127.0.0.1 - - [05/Jan/2017 01:51:56] "PUT /images/62cdc66d-8ba3-477f-bb66-6b7df6cddfce HTTP/1.1" 401 204 0.028659
2017-01-05 01:51:56.269 22350 INFO eventlet.wsgi.server [-] 127.0.0.1 - - [05/Jan/2017 01:51:56] "PUT /images/62cdc66d-8ba3-477f-bb66-6b7df6cddfce HTTP/1.1" 401 204 0.001075
2017-01-05 01:51:58.420 22376 WARNING keystonemiddleware.auth_token [-] Authorization failed for token
2017-01-05 01:51:58.421 22376 WARNING keystonemiddleware.auth_token [-] Identity response: {"error": {"message": "Could not find token: 4f5755cc[snip]ee", "code": 404, "title": "Not Found"}}
2017-01-05 01:51:58.421 22376 WARNING keystonemiddleware.auth_token [-] Authorization failed for token
2017-01-05 01:51:58.424 22376 INFO eventlet.wsgi.server [-] 127.0.0.1 - - [05/Jan/2017 01:51:58] "PUT /images/62cdc66d-8ba3-477f-bb66-6b7df6cddfce HTTP/1.1" 401 204 0.027043
2017-01-05 01:51:58.426 22374 INFO eventlet.wsgi.server [-] 127.0.0.1 - - [05/Jan/2017 01:51:58] "PUT /images/62cdc66d-8ba3-477f-bb66-6b7df6cddfce HTTP/1.1" 401 204 0.001307

Relevant lines from nova-compute log attached.

The keystone.token table says this token (4f5755cc[snip]ee) was issued at 2017-01-04T14:11:50.137811Z and expires at 2017-01-05T06:51:50.137783Z. The logs are in EST (UTC-5), so the token expired right before glance gave the authorization-failed error; it had been issued well before the snapshot process started.
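The timeline above checks out arithmetically (the EST/UTC-5 offset is taken from the comment; the timestamps are copied from the table and the log):

```python
from datetime import datetime, timedelta, timezone

# Timestamps from the keystone.token table, both UTC.
issued  = datetime(2017, 1, 4, 14, 11, 50, tzinfo=timezone.utc)
expires = datetime(2017, 1, 5,  6, 51, 50, tzinfo=timezone.utc)
ttl = (expires - issued).total_seconds()
print(ttl)  # 60000.0 -- matches the configured [token] expiration=60000

# First 401 in the glance-registry log, assuming EST (UTC-5) timestamps.
est = timezone(timedelta(hours=-5))
error_at = datetime(2017, 1, 5, 1, 51, 56, tzinfo=est)
print((error_at - expires).total_seconds())  # 6.0 -- token expired ~6 s earlier
```

So the 401s appear about six seconds after the token's expiry, consistent with the token timing out mid-snapshot rather than any Glance-side failure.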

Is there any way to have glance renew the token well before the expiration time?

Revision history for this message
Tuan (tuanla) wrote :

You can add the following to local.conf to work around the bug:

SWIFT_LOOPBACK_DISK_SIZE=100G
[[post-config|${SWIFT_CONF_DIR}/swift.conf]]
[swift-constraints]
max_file_size = 21474836482

From <https://ask.openstack.org/en/question/66825/error-during-chunked-upload-cant-create-large-glance-images/>
