cinder backup create fails with cannot allocate memory error while performing 1TB backup

Bug #1771737 reported by Chhavi Agarwal
This bug affects 1 person
Affects: Cinder
Status: Incomplete
Importance: Undecided
Assigned to: Unassigned

Bug Description

While performing a 1TB backup to a Swift backup server, the cinder-backup operation fails with the error below after running for about 7 hours.

The system has approximately 5 GB of swap:
[root@ia-dnd-2tb-data cinder]# free -m
              total        used        free      shared  buff/cache   available
Mem:          15320       14310         389          29         620         239
Swap:          5612        2905        2707
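
Errno 12 from os.fork() usually means the kernel declined to account for a copy of the parent's address space under its overcommit policy, not necessarily that the host was literally out of RAM at that instant. As a diagnostic sketch (assuming a Linux host), the relevant settings can be read alongside the free output above:

# Diagnostic sketch (Linux only): print the overcommit policy and the
# commit-charge counters that decide whether fork() of a large process
# is refused with ENOMEM.
def read_proc(path):
    with open(path) as f:
        return f.read()

# 0 = heuristic overcommit, 1 = always allow, 2 = strict accounting
print("vm.overcommit_memory = %s" % read_proc('/proc/sys/vm/overcommit_memory').strip())

wanted = ('CommitLimit', 'Committed_AS', 'MemAvailable')
for line in read_proc('/proc/meminfo').splitlines():
    if line.split(':')[0] in wanted:
        print(line)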

2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server [req-2468cad0-1800-47af-9fd9-7327087e1705 0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9 9a6a7f970145423194fa80db3bf077f7 - - -] Exception during message handling: OSError: [Errno 12] Cannot allocate memory
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 163, in _process_incoming
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 220, in dispatch
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 190, in _do_dispatch
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/cinder/backup/manager.py", line 413, in create_backup
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server self._update_backup_error(backup, six.text_type(err))
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server self.force_reraise()
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/cinder/backup/manager.py", line 402, in create_backup
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server updates = self._run_backup(context, backup, volume)
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/cinder/backup/manager.py", line 485, in _run_backup
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server backup, tpool.Proxy(device_file))
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server File "/usr/lib64/python2.7/contextlib.py", line 35, in __exit__
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server self.gen.throw(type, value, traceback)
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 449, in temporary_chown
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server execute('chown', orig_uid, path, run_as_root=True)
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 126, in execute
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server return processutils.execute(*cmd, **kwargs)
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py", line 391, in execute
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server env=env_variables)
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/site-packages/eventlet/green/subprocess.py", line 55, in __init__
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server subprocess_orig.Popen.__init__(self, args, 0, *argss, **kwds)
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server errread, errwrite)
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server File "/usr/lib64/python2.7/subprocess.py", line 1224, in _execute_child
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server self.pid = os.fork()
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server OSError: [Errno 12] Cannot allocate memory
2018-05-16 14:11:46.498 23633 ERROR oslo_messaging.rpc.server
2018-05-16 14:11:48.528 23633 WARNING oslo.messaging._drivers.impl_rabbit [-] Unexpected error during heartbeart thread processing, retrying...: IOError: Socket closed
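
The traceback bottoms out in os.fork(): on the way out of temporary_chown, cinder/utils.py runs execute('chown', orig_uid, path, run_as_root=True) through processutils, and spawning that child first has to fork the cinder-backup process itself. Once the service has grown to several gigabytes of resident memory on a nearly full host, that fork can be refused with ENOMEM even though the chown child itself would need almost nothing. A minimal sketch of the same pattern (not cinder code; the ballast size is purely illustrative):

# Sketch of the failure pattern above: a large parent forking a tiny child.
# On a host with little free memory and swap (or vm.overcommit_memory=2),
# subprocess.Popen -> os.fork() can raise OSError(errno.ENOMEM).
import errno
import subprocess

ballast = bytearray(6 * 1024 ** 3)   # stand-in for the service's working set

try:
    # Same shape as execute('chown', ...) in temporary_chown: the child is
    # trivial, but fork() must first account for a copy of the parent.
    subprocess.check_call(['true'])
except OSError as err:
    if err.errno == errno.ENOMEM:
        print("spawn failed: %s" % err)   # [Errno 12] Cannot allocate memory
    else:
        raise

This is also why long-running Python services generally either keep their resident size bounded or hand privileged commands to a long-lived helper (oslo.rootwrap's daemon mode is one example) rather than forking once per command.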

summary: - cinder backup create fails withs Out of memory error while performing 1TB backup
+ cinder backup create fails withs cannot allocate memory error while performing 1TB backup
summary: - cinder backup create fails withs cannot allocate memory error while performing 1TB backup
+ cinder backup create fails with cannot allocate memory error while performing 1TB backup
Eric Harney (eharney) wrote:

Please include some information about what exactly is consuming memory on the system. This could be unrelated to Cinder; it's not possible to tell from this report alone.

Changed in cinder:
status: New → Incomplete
Chhavi Agarwal (chhagarw) wrote:

I could see cinder-backup using the most memory:

[root@ia-dnd-2tb-data ~]# ps aux --sort=-%mem | awk 'NR<=10{print $0}'
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
cinder 23633 35.7 47.9 10338752 7522688 ? Ssl May16 652:26 /usr/bin/python /usr/bin/cinder-backup --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/backup.log
keystone 1690 0.7 1.4 822464 228992 ? Sl May14 34:13 keystone-public -DFOREGROUND
cinder 10349 2.2 1.4 509696 224640 ? Ss Apr18 942:13 /usr/bin/python /usr/bin/cinder-ibm-health --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/ibm-health.log
nova 11806 0.0 1.3 427712 204352 ? S Apr18 36:59 /usr/bin/python /usr/bin/nova-api
nova 11805 0.0 1.2 427200 203776 ? S Apr18 39:59 /usr/bin/python /usr/bin/nova-api
nova 11807 0.0 1.2 427776 203648 ? S Apr18 38:28 /usr/bin/python /usr/bin/nova-api
nova 11804 0.0 1.2 427584 203264 ? S Apr18 39:14 /usr/bin/python /usr/bin/nova-api
cinder 23793 0.1 1.2 461632 201920 ? S May16 3:05 /usr/bin/python /usr/bin/cinder-api --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/api.log

prashkre (prashkre) wrote:

Attaching a memory profile log showing the increase in memory usage while taking a 1TB backup.
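
The attachment's collection method isn't shown here; one hypothetical way to gather a comparable resident-size-over-time profile is to sample /proc/<pid>/status while the backup runs (the PID and interval below are placeholders, with the PID taken from the ps listing above):

# Hypothetical sampler for a memory-usage-over-time profile of cinder-backup.
# Run it alongside the backup and plot the output afterwards.
import time

PID = 23633        # cinder-backup PID from the ps listing above
INTERVAL = 30      # seconds between samples

def vm_rss_kb(pid):
    with open('/proc/%d/status' % pid) as f:
        for line in f:
            if line.startswith('VmRSS:'):
                return int(line.split()[1])   # kernel reports the value in kB
    return None

while True:
    print("%s VmRSS=%s kB" % (time.strftime('%Y-%m-%d %H:%M:%S'), vm_rss_kb(PID)))
    time.sleep(INTERVAL)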
