Cgroups memory limit is causing the virt/qemu to be terminated unexpectedly

Bug #1102290 reported by Ritesh Khadgaray
Affects: libvirt (Ubuntu)
Status: Fix Released
Importance: High
Assigned to: Unassigned

Bug Description

From syslog:

Jan 21 11:36:26 x230 kernel: [ 1408.511611] kvm-spice invoked oom-killer: gfp_mask=0x50, order=0, oom_score_adj=0
Jan 21 11:36:26 x230 kernel: [ 1408.511615] kvm-spice cpuset=emulator mems_allowed=0
Jan 21 11:36:26 x230 kernel: [ 1408.511617] Pid: 3594, comm: kvm-spice Tainted: GF 3.8.0-1-generic #5-Ubuntu
Jan 21 11:36:26 x230 kernel: [ 1408.511618] Call Trace:
Jan 21 11:36:26 x230 kernel: [ 1408.511623] [<ffffffff810cfcad>] ? cpuset_print_task_mems_allowed+0x9d/0xb0
Jan 21 11:36:26 x230 kernel: [ 1408.511627] [<ffffffff816bc0e8>] dump_header+0x80/0x1c1
Jan 21 11:36:26 x230 kernel: [ 1408.511630] [<ffffffff81131f1e>] ? find_lock_task_mm+0x2e/0x80
Jan 21 11:36:26 x230 kernel: [ 1408.511633] [<ffffffff81188f98>] ? try_get_mem_cgroup_from_mm+0x48/0x60
Jan 21 11:36:26 x230 kernel: [ 1408.511635] [<ffffffff81132317>] oom_kill_process+0x1b7/0x310
Jan 21 11:36:26 x230 kernel: [ 1408.511636] [<ffffffff81189090>] ? mem_cgroup_iter+0xe0/0x1e0
Jan 21 11:36:26 x230 kernel: [ 1408.511638] [<ffffffff81065807>] ? has_capability_noaudit+0x17/0x20
Jan 21 11:36:26 x230 kernel: [ 1408.511641] [<ffffffff8118be25>] __mem_cgroup_try_charge+0xb05/0xb50
Jan 21 11:36:26 x230 kernel: [ 1408.511643] [<ffffffff8118c550>] ? mem_cgroup_charge_common+0xa0/0xa0
Jan 21 11:36:26 x230 kernel: [ 1408.511645] [<ffffffff8118c501>] mem_cgroup_charge_common+0x51/0xa0
Jan 21 11:36:26 x230 kernel: [ 1408.511647] [<ffffffff8118d18a>] mem_cgroup_cache_charge+0x7a/0x90
Jan 21 11:36:26 x230 kernel: [ 1408.511648] [<ffffffff8112f5ea>] add_to_page_cache_locked+0x4a/0x140
Jan 21 11:36:26 x230 kernel: [ 1408.511650] [<ffffffff8112f6fa>] add_to_page_cache_lru+0x1a/0x40
Jan 21 11:36:26 x230 kernel: [ 1408.511652] [<ffffffff8112f7b2>] grab_cache_page_write_begin+0x92/0xf0
Jan 21 11:36:26 x230 kernel: [ 1408.511655] [<ffffffff81233a21>] ext4_da_write_begin+0xb1/0x260
Jan 21 11:36:26 x230 kernel: [ 1408.511657] [<ffffffff8112e5c3>] generic_file_buffered_write+0x113/0x270
Jan 21 11:36:26 x230 kernel: [ 1408.511659] [<ffffffff81130284>] __generic_file_aio_write+0x1b4/0x3d0
Jan 21 11:36:26 x230 kernel: [ 1408.511661] [<ffffffff811957f4>] ? __sb_start_write+0x54/0x110
Jan 21 11:36:26 x230 kernel: [ 1408.511662] [<ffffffff8113051f>] generic_file_aio_write+0x7f/0x100
Jan 21 11:36:26 x230 kernel: [ 1408.511664] [<ffffffff8122cf59>] ext4_file_write+0xa9/0x420
Jan 21 11:36:26 x230 kernel: [ 1408.511667] [<ffffffff81081066>] ? __remove_hrtimer+0x66/0xc0
Jan 21 11:36:26 x230 kernel: [ 1408.511669] [<ffffffff810816ea>] ? hrtimer_try_to_cancel+0x4a/0xd0
Jan 21 11:36:26 x230 kernel: [ 1408.511670] [<ffffffff8122ceb0>] ? ext4_unwritten_wait+0xc0/0xc0
Jan 21 11:36:26 x230 kernel: [ 1408.511672] [<ffffffff81193ce3>] do_sync_readv_writev+0xa3/0xe0
Jan 21 11:36:26 x230 kernel: [ 1408.511674] [<ffffffff81193fb4>] do_readv_writev+0xd4/0x1e0
Jan 21 11:36:26 x230 kernel: [ 1408.511676] [<ffffffff810b74e1>] ? do_futex+0x111/0x5a0
Jan 21 11:36:26 x230 kernel: [ 1408.511678] [<ffffffff811940f5>] vfs_writev+0x35/0x60
Jan 21 11:36:26 x230 kernel: [ 1408.511680] [<ffffffff811944a2>] sys_pwritev+0xc2/0xe0
Jan 21 11:36:26 x230 kernel: [ 1408.511683] [<ffffffff816cfd1d>] system_call_fastpath+0x1a/0x1f
Jan 21 11:36:26 x230 kernel: [ 1408.511685] Task in /libvirt/qemu/Ubuntu-12.04.x86_64 killed as a result of limit of /libvirt/qemu/Ubuntu-12.04.x86_64

Jan 21 11:36:26 x230 kernel: [ 1408.511686] memory: usage 1341196kB, limit 1341196kB, failcnt 217479
Jan 21 11:36:26 x230 kernel: [ 1408.511687] memory+swap: usage 0kB, limit 9007199254740991kB, failcnt 0
Jan 21 11:36:26 x230 kernel: [ 1408.511688] kmem: usage 0kB, limit 9007199254740991kB, failcnt 0
Jan 21 11:36:26 x230 kernel: [ 1408.511689] Mem-Info:
Jan 21 11:36:26 x230 kernel: [ 1408.511690] Node 0 DMA per-cpu:
Jan 21 11:36:26 x230 kernel: [ 1408.511692] CPU 0: hi: 0, btch: 1 usd: 0
Jan 21 11:36:26 x230 kernel: [ 1408.511692] CPU 1: hi: 0, btch: 1 usd: 0
Jan 21 11:36:26 x230 kernel: [ 1408.511693] CPU 2: hi: 0, btch: 1 usd: 0
Jan 21 11:36:26 x230 kernel: [ 1408.511694] CPU 3: hi: 0, btch: 1 usd: 0
Jan 21 11:36:26 x230 kernel: [ 1408.511695] Node 0 DMA32 per-cpu:
Jan 21 11:36:26 x230 kernel: [ 1408.511696] CPU 0: hi: 186, btch: 31 usd: 157
Jan 21 11:36:26 x230 kernel: [ 1408.511697] CPU 1: hi: 186, btch: 31 usd: 168
Jan 21 11:36:26 x230 kernel: [ 1408.511697] CPU 2: hi: 186, btch: 31 usd: 2
Jan 21 11:36:26 x230 kernel: [ 1408.511698] CPU 3: hi: 186, btch: 31 usd: 29
Jan 21 11:36:26 x230 kernel: [ 1408.511699] Node 0 Normal per-cpu:
Jan 21 11:36:26 x230 kernel: [ 1408.511700] CPU 0: hi: 186, btch: 31 usd: 117
Jan 21 11:36:26 x230 kernel: [ 1408.511701] CPU 1: hi: 186, btch: 31 usd: 168
Jan 21 11:36:26 x230 kernel: [ 1408.511702] CPU 2: hi: 186, btch: 31 usd: 47
Jan 21 11:36:26 x230 kernel: [ 1408.511703] CPU 3: hi: 186, btch: 31 usd: 108
Jan 21 11:36:26 x230 kernel: [ 1408.511705] active_anon:849769 inactive_anon:233379 isolated_anon:0
Jan 21 11:36:26 x230 kernel: [ 1408.511705] active_file:58571 inactive_file:213258 isolated_file:0
Jan 21 11:36:26 x230 kernel: [ 1408.511705] unevictable:8075 dirty:43807 writeback:0 unstable:0
Jan 21 11:36:26 x230 kernel: [ 1408.511705] free:2603702 slab_reclaimable:15602 slab_unreclaimable:9429
Jan 21 11:36:26 x230 kernel: [ 1408.511705] mapped:48755 shmem:85862 pagetables:9907 bounce:0
Jan 21 11:36:26 x230 kernel: [ 1408.511705] free_cma:0

Jan 21 11:36:26 x230 kernel: [ 1408.511707] Node 0 DMA free:15900kB min:64kB low:80kB high:96kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15644kB managed:15900kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Jan 21 11:36:26 x230 kernel: [ 1408.511710] lowmem_reserve[]: 0 3264 15839 15839
Jan 21 11:36:26 x230 kernel: [ 1408.511712] Node 0 DMA32 free:3316564kB min:13916kB low:17392kB high:20872kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3343028kB managed:3291020kB mlocked:0kB dirty:0kB writeback:0kB mapped:8kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Jan 21 11:36:26 x230 kernel: [ 1408.511714] lowmem_reserve[]: 0 0 12574 12574
Jan 21 11:36:26 x230 kernel: [ 1408.511716] Node 0 Normal free:7082344kB min:53600kB low:67000kB high:80400kB active_anon:3399076kB inactive_anon:933516kB active_file:234284kB inactive_file:853032kB unevictable:32300kB isolated(anon):0kB isolated(file):0kB present:12876192kB managed:12818824kB mlocked:32300kB dirty:175228kB writeback:0kB mapped:195012kB shmem:343448kB slab_reclaimable:62408kB slab_unreclaimable:37716kB kernel_stack:4568kB pagetables:39628kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Jan 21 11:36:26 x230 kernel: [ 1408.511719] lowmem_reserve[]: 0 0 0 0
Jan 21 11:36:26 x230 kernel: [ 1408.511720] Node 0 DMA: 1*4kB (U) 1*8kB (U) 1*16kB (U) 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (R) 3*4096kB (M) = 15900kB
Jan 21 11:36:26 x230 kernel: [ 1408.511727] Node 0 DMA32: 5*4kB (M) 8*8kB (UM) 4*16kB (M) 8*32kB (M) 7*64kB (M) 6*128kB (M) 7*256kB (UM) 5*512kB (UM) 5*1024kB (UM) 6*2048kB (UM) 804*4096kB (MR) = 3316564kB
Jan 21 11:36:26 x230 kernel: [ 1408.511734] Node 0 Normal: 116*4kB (UEM) 218*8kB (UEM) 162*16kB (UEM) 218*32kB (UEM) 90*64kB (UEM) 60*128kB (UE) 43*256kB (UE) 22*512kB (UE) 10*1024kB (UM) 6*2048kB (UE) 1712*4096kB (UMR) = 7082368kB
Jan 21 11:36:26 x230 kernel: [ 1408.511741] 366822 total pagecache pages
Jan 21 11:36:26 x230 kernel: [ 1408.511742] 7657 pages in swap cache
Jan 21 11:36:26 x230 kernel: [ 1408.511743] Swap cache stats: add 7658, delete 1, find 1/1
Jan 21 11:36:26 x230 kernel: [ 1408.511744] Free swap = 16463960kB
Jan 21 11:36:26 x230 kernel: [ 1408.511745] Total swap = 16494588kB
Jan 21 11:36:26 x230 kernel: [ 1408.533073] 4154864 pages RAM
Jan 21 11:36:26 x230 kernel: [ 1408.533078] 116655 pages reserved
Jan 21 11:36:26 x230 kernel: [ 1408.533081] 1397984 pages shared
Jan 21 11:36:26 x230 kernel: [ 1408.533085] 1220173 pages non-shared
Jan 21 11:36:26 x230 kernel: [ 1408.533088] [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name
Jan 21 11:36:26 x230 kernel: [ 1408.533141] [ 3321] 117 3321 1005418 285951 834 7658 0 kvm-spice
Jan 21 11:36:26 x230 kernel: [ 1408.533145] Memory cgroup out of memory: Kill process 3608 (kvm-spice) score 66 or sacrifice child
Jan 21 11:36:26 x230 kernel: [ 1408.533152] Killed process 3608 (kvm-spice) total-vm:4021672kB, anon-rss:1136936kB, file-rss:6868kB
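The decisive lines in this trace are the memcg report: usage equals the limit (1341196 kB), and the failing allocation (gfp_mask=0x50) came from a buffered ext4 write charging the page cache, so the cgroup OOM killer fires even though the host has plenty of free memory. A small parser for such report lines (a hypothetical helper, not part of any existing tool):

```python
import re

def parse_memcg_line(line):
    """Extract usage, limit and failcnt (kB) from a kernel memcg OOM report line."""
    m = re.search(r"usage (\d+)kB, limit (\d+)kB, failcnt (\d+)", line)
    if not m:
        return None
    usage, limit, failcnt = (int(g) for g in m.groups())
    return {"usage_kb": usage, "limit_kb": limit, "failcnt": failcnt,
            "at_limit": usage >= limit}

line = ("Jan 21 11:36:26 x230 kernel: [ 1408.511686] memory: "
        "usage 1341196kB, limit 1341196kB, failcnt 217479")
info = parse_memcg_line(line)
# usage == limit: every further charge fails until reclaim succeeds or a task is killed
```

The high failcnt (217479 failed charges) shows the cgroup had been bouncing off its limit well before the kill.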

Tags: raring


Revision history for this message
Serge Hallyn (serge-hallyn) wrote :

Could you please tell us how you configured both the memory cgroup and the VM? The kvm-spice VM is taking about 1.14G of memory - which is not at all unexpected, unless you asked it to take up 512M :)

So the first question I'm trying to answer is just whether there is a misconfiguration (the VM was given too much memory or the memory cgroup not enough - either by you or by defaults), or whether there is a memory leak in libvirt or kvm-spice.

Changed in libvirt (Ubuntu):
importance: Undecided → High
status: New → Incomplete
Revision history for this message
Ritesh Khadgaray (khadgaray) wrote :

I have not configured anything; it is the default configuration as-is. I will post a dump of the sysfs memory cgroups. Is there anything else required?

 -- ritz

Revision history for this message
Serge Hallyn (serge-hallyn) wrote :

Can you reliably reproduce this? Can you tell us the exact steps, preferably from system install to crash (but I'll take what I can get :)?

Thanks!

Revision history for this message
Ritesh Khadgaray (khadgaray) wrote :

Hi

After today's update, I am unable to reproduce this issue. Marking this as invalid.

This system is an 11.10 -> 12.04 -> 12.10 -> 13.04 upgrade, with default settings. I was trying to build LibreOffice on two VMs (12.04/12.10) and running an update on another (Fedora rawhide).

Thanks

Changed in libvirt (Ubuntu):
status: Incomplete → Invalid
Changed in libvirt (Ubuntu):
status: Invalid → New
Revision history for this message
Ritesh Khadgaray (khadgaray) wrote :

Hit this again

Jan 24 14:49:58 x230 kernel: [14552.596396] kvm-spice invoked oom-killer: gfp_mask=0x50, order=0, oom_score_adj=0
Jan 24 14:49:58 x230 kernel: [14552.596404] kvm-spice cpuset=emulator mems_allowed=0
Jan 24 14:49:58 x230 kernel: [14552.596409] Pid: 27542, comm: kvm-spice Tainted: GF 3.8.0-1-generic #5-Ubuntu
Jan 24 14:49:58 x230 kernel: [14552.596411] Call Trace:
Jan 24 14:49:58 x230 kernel: [14552.596422] [<ffffffff810cfcad>] ? cpuset_print_task_mems_allowed+0x9d/0xb0
Jan 24 14:49:58 x230 kernel: [14552.596431] [<ffffffff816bc0e8>] dump_header+0x80/0x1c1
Jan 24 14:49:58 x230 kernel: [14552.596438] [<ffffffff81131f1e>] ? find_lock_task_mm+0x2e/0x80
Jan 24 14:49:58 x230 kernel: [14552.596444] [<ffffffff81188f98>] ? try_get_mem_cgroup_from_mm+0x48/0x60
Jan 24 14:49:58 x230 kernel: [14552.596450] [<ffffffff81132317>] oom_kill_process+0x1b7/0x310
Jan 24 14:49:58 x230 kernel: [14552.596454] [<ffffffff81189090>] ? mem_cgroup_iter+0xe0/0x1e0
Jan 24 14:49:58 x230 kernel: [14552.596460] [<ffffffff81065807>] ? has_capability_noaudit+0x17/0x20
Jan 24 14:49:58 x230 kernel: [14552.596465] [<ffffffff8118be25>] __mem_cgroup_try_charge+0xb05/0xb50
Jan 24 14:49:58 x230 kernel: [14552.596471] [<ffffffff8118c550>] ? mem_cgroup_charge_common+0xa0/0xa0
Jan 24 14:49:58 x230 kernel: [14552.596476] [<ffffffff8118c501>] mem_cgroup_charge_common+0x51/0xa0
Jan 24 14:49:58 x230 kernel: [14552.596481] [<ffffffff8118d18a>] mem_cgroup_cache_charge+0x7a/0x90
Jan 24 14:49:58 x230 kernel: [14552.596485] [<ffffffff8112f5ea>] add_to_page_cache_locked+0x4a/0x140
Jan 24 14:49:58 x230 kernel: [14552.596490] [<ffffffff8112f6fa>] add_to_page_cache_lru+0x1a/0x40
Jan 24 14:49:58 x230 kernel: [14552.596494] [<ffffffff8112f7b2>] grab_cache_page_write_begin+0x92/0xf0
Jan 24 14:49:58 x230 kernel: [14552.596502] [<ffffffff81233a21>] ext4_da_write_begin+0xb1/0x260
Jan 24 14:49:58 x230 kernel: [14552.596507] [<ffffffff8112e5c3>] generic_file_buffered_write+0x113/0x270
Jan 24 14:49:58 x230 kernel: [14552.596512] [<ffffffff81130284>] __generic_file_aio_write+0x1b4/0x3d0
Jan 24 14:49:58 x230 kernel: [14552.596516] [<ffffffff811957f4>] ? __sb_start_write+0x54/0x110
Jan 24 14:49:58 x230 kernel: [14552.596520] [<ffffffff8113051f>] generic_file_aio_write+0x7f/0x100
Jan 24 14:49:58 x230 kernel: [14552.596526] [<ffffffff8122cf59>] ext4_file_write+0xa9/0x420
Jan 24 14:49:58 x230 kernel: [14552.596533] [<ffffffff810816ea>] ? hrtimer_try_to_cancel+0x4a/0xd0
Jan 24 14:49:58 x230 kernel: [14552.596538] [<ffffffff8122ceb0>] ? ext4_unwritten_wait+0xc0/0xc0
Jan 24 14:49:58 x230 kernel: [14552.596542] [<ffffffff81193ce3>] do_sync_readv_writev+0xa3/0xe0
Jan 24 14:49:58 x230 kernel: [14552.596547] [<ffffffff81193fb4>] do_readv_writev+0xd4/0x1e0
Jan 24 14:49:58 x230 kernel: [14552.596553] [<ffffffff810b74e1>] ? do_futex+0x111/0x5a0
Jan 24 14:49:58 x230 kernel: [14552.596558] [<ffffffff811940f5>] vfs_writev+0x35/0x60
Jan 24 14:49:58 x230 kernel: [14552.596562] [<ffffffff811944a2>] sys_pwritev+0xc2/0xe0
Jan 24 14:49:58 x230 kernel: [14552.596568] [<ffffffff816cfd1d>] system_call_fastpath+0x1a/0x1f
Jan 24 14:49:58 x230 kernel: [14552.596572] Task in /libvirt/qem...

Revision history for this message
Serge Hallyn (serge-hallyn) wrote :

Thanks. I think upstream commit 3c83df679e8feab939c08b1f97c48f9290a5b8cd might fix this.

Changed in libvirt (Ubuntu):
status: New → Triaged
Revision history for this message
Ritesh Khadgaray (khadgaray) wrote :

Thanks, will test this and report back.

Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package libvirt - 1.0.1-0ubuntu2

---------------
libvirt (1.0.1-0ubuntu2) raring; urgency=low

  * add qemu-relax-hard-rss-limit.rss to avoid OOM kills (LP: #1102290)
  * debian/rules: replace --without-vbox with --with-vbox (LP: #1103721)
 -- Serge Hallyn <email address hidden> Thu, 24 Jan 2013 13:00:48 -0600
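The patch name suggests the fix gives the memcg hard limit headroom above the guest's configured RAM: QEMU itself needs memory beyond guest RAM (emulation threads, page-cache charges for disk I/O), so a hard limit pinned exactly at guest size invites memcg OOM kills like the ones above. A purely illustrative sketch of that kind of heuristic; the constants and formula here are invented, not the ones in the actual patch:

```python
def relaxed_hard_limit_kib(guest_mem_kib, headroom_fraction=0.5,
                           min_headroom_kib=256 * 1024):
    """Illustrative only: add headroom above guest RAM for the cgroup hard limit.

    The fraction and floor are made-up numbers chosen for the example; the
    point is that QEMU's overhead must be budgeted on top of guest memory.
    """
    headroom = max(int(guest_mem_kib * headroom_fraction), min_headroom_kib)
    return guest_mem_kib + headroom

# A 1 GiB guest gets 512 MiB of slack under these made-up numbers:
print(relaxed_hard_limit_kib(1024 * 1024))  # 1572864
```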

Changed in libvirt (Ubuntu):
status: Triaged → Fix Released
Revision history for this message
Ritesh Khadgaray (khadgaray) wrote :

no go. I still see the issue.

Revision history for this message
Ritesh Khadgaray (khadgaray) wrote :

Jan 25 01:39:08 x230 kernel: [19883.259594] kvm-spice invoked oom-killer: gfp_mask=0x50, order=0, oom_score_adj=0
Jan 25 01:39:08 x230 kernel: [19883.259599] kvm-spice cpuset=emulator mems_allowed=0
Jan 25 01:39:08 x230 kernel: [19883.259601] Pid: 14971, comm: kvm-spice Tainted: GF 3.8.0-1-generic #5-Ubuntu
Jan 25 01:39:08 x230 kernel: [19883.259603] Call Trace:
Jan 25 01:39:08 x230 kernel: [19883.259609] [<ffffffff810cfcad>] ? cpuset_print_task_mems_allowed+0x9d/0xb0
Jan 25 01:39:08 x230 kernel: [19883.259615] [<ffffffff816bc0e8>] dump_header+0x80/0x1c1
Jan 25 01:39:08 x230 kernel: [19883.259618] [<ffffffff81131f1e>] ? find_lock_task_mm+0x2e/0x80
Jan 25 01:39:08 x230 kernel: [19883.259622] [<ffffffff81188f98>] ? try_get_mem_cgroup_from_mm+0x48/0x60
Jan 25 01:39:08 x230 kernel: [19883.259625] [<ffffffff81132317>] oom_kill_process+0x1b7/0x310
Jan 25 01:39:08 x230 kernel: [19883.259627] [<ffffffff81189090>] ? mem_cgroup_iter+0xe0/0x1e0
Jan 25 01:39:08 x230 kernel: [19883.259631] [<ffffffff81065807>] ? has_capability_noaudit+0x17/0x20
Jan 25 01:39:08 x230 kernel: [19883.259634] [<ffffffff8118be25>] __mem_cgroup_try_charge+0xb05/0xb50
Jan 25 01:39:08 x230 kernel: [19883.259637] [<ffffffff8118c550>] ? mem_cgroup_charge_common+0xa0/0xa0
Jan 25 01:39:08 x230 kernel: [19883.259640] [<ffffffff8118c501>] mem_cgroup_charge_common+0x51/0xa0
Jan 25 01:39:08 x230 kernel: [19883.259643] [<ffffffff8118d18a>] mem_cgroup_cache_charge+0x7a/0x90
Jan 25 01:39:08 x230 kernel: [19883.259645] [<ffffffff8112f5ea>] add_to_page_cache_locked+0x4a/0x140
Jan 25 01:39:08 x230 kernel: [19883.259648] [<ffffffff8112f6fa>] add_to_page_cache_lru+0x1a/0x40
Jan 25 01:39:08 x230 kernel: [19883.259650] [<ffffffff8112f7b2>] grab_cache_page_write_begin+0x92/0xf0
Jan 25 01:39:08 x230 kernel: [19883.259654] [<ffffffff81233a21>] ext4_da_write_begin+0xb1/0x260
Jan 25 01:39:08 x230 kernel: [19883.259656] [<ffffffff8112e5c3>] generic_file_buffered_write+0x113/0x270
Jan 25 01:39:08 x230 kernel: [19883.259659] [<ffffffff81130284>] __generic_file_aio_write+0x1b4/0x3d0
Jan 25 01:39:08 x230 kernel: [19883.259662] [<ffffffff811957f4>] ? __sb_start_write+0x54/0x110
Jan 25 01:39:08 x230 kernel: [19883.259664] [<ffffffff8113051f>] generic_file_aio_write+0x7f/0x100
Jan 25 01:39:08 x230 kernel: [19883.259666] [<ffffffff8122cf59>] ext4_file_write+0xa9/0x420
Jan 25 01:39:08 x230 kernel: [19883.259670] [<ffffffff810816ea>] ? hrtimer_try_to_cancel+0x4a/0xd0
Jan 25 01:39:08 x230 kernel: [19883.259672] [<ffffffff8122ceb0>] ? ext4_unwritten_wait+0xc0/0xc0
Jan 25 01:39:08 x230 kernel: [19883.259675] [<ffffffff81193ce3>] do_sync_readv_writev+0xa3/0xe0
Jan 25 01:39:08 x230 kernel: [19883.259677] [<ffffffff81193fb4>] do_readv_writev+0xd4/0x1e0
Jan 25 01:39:08 x230 kernel: [19883.259680] [<ffffffff810b74e1>] ? do_futex+0x111/0x5a0
Jan 25 01:39:08 x230 kernel: [19883.259682] [<ffffffff811940f5>] vfs_writev+0x35/0x60
Jan 25 01:39:08 x230 kernel: [19883.259684] [<ffffffff811944a2>] sys_pwritev+0xc2/0xe0
Jan 25 01:39:08 x230 kernel: [19883.259687] [<ffffffff816cfd1d>] system_call_fastpath+0x1a/0x1f
Jan 25 01:39:08 x230 kernel: [19883.259690] Task in /libvirt/qemu/Ubuntu-11.10-x...


Changed in libvirt (Ubuntu):
status: Fix Released → Triaged
status: Triaged → Fix Released
Revision history for this message
Ritesh Khadgaray (khadgaray) wrote :

Reboot with the new packages. Works fine. Thanks.

Revision history for this message
Serge Hallyn (serge-hallyn) wrote : Re: [Bug 1102290] Re: Cgroups memory limit are causing the virt/qemu to be terminated unexpectedly

Quoting Ritesh Khadgaray (<email address hidden>):
> Reboot with the new packages. Works fine. Thanks.

Thanks - if you do see it again, please re-open this bug.

Revision history for this message
Mike (mike32767) wrote :

I am still seeing this issue with libvirt 1.1.1-0ubuntu8, although it has only (just) happened once after several months of no problems.

VM (called london) has:

<domain type="kvm">
  ...
  <memory unit='KiB'>1024000</memory>
  <currentMemory unit='KiB'>1024000</currentMemory>
...
</domain>

and "virsh memtune london" gives:

hard_limit : 1992192
soft_limit : unlimited
swap_hard_limit: unlimited

There is no swap on the host and plenty of RAM available - from top:

    KiB Mem: 32896936 total, 23104716 used, 9792220 free, 10520 buffers
    KiB Swap: 0 total, 0 used, 0 free, 4305340 cached

      PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
    22393 libvirt- 20 0 14.6g 7.9g 6812 S 1 25.2 3364:39 qemu-system-x86
     2691 libvirt- 20 0 9244m 4.0g 804 S 0 12.7 1316:37 qemu-system-x86
     9535 libvirt- 20 0 5666m 1.0g 1052 S 0 3.3 115:25.26 qemu-system-x86
    24739 libvirt- 20 0 2478m 1.0g 7176 S 0 3.1 68:16.96 qemu-system-x86
    31162 libvirt- 20 0 2478m 474m 9756 S 1 1.5 0:57.18 qemu-system-x86
     2216 root 20 0 944m 29m 3556 S 0 0.1 0:05.99 libvirtd

yet I get this:

Mar 18 06:55:41 fractal kernel: [2376404.957208] Task in /machine/london.libvirt-qemu killed as a result of limit of /machine/london.libvirt-qemu
Mar 18 06:55:41 fractal kernel: [2376404.957212] memory: usage 1992192kB, limit 1992192kB, failcnt 261
Mar 18 06:55:41 fractal kernel: [2376404.957215] memory+swap: usage 0kB, limit 9007199254740991kB, failcnt 0
Mar 18 06:55:41 fractal kernel: [2376404.957217] kmem: usage 0kB, limit 9007199254740991kB, failcnt 0
Mar 18 06:55:41 fractal kernel: [2376404.957219] Memory cgroup stats for /machine/london.libvirt-qemu: cache:104KB rss:1992088KB rss_huge:6144KB mapped_file:20KB inactive_anon:16KB active_anon:1991592KB inactive_file:76KB active_file:0KB unevictable:0KB
Mar 18 06:55:41 fractal kernel: [2376404.957241] [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name
Mar 18 06:55:41 fractal kernel: [2376404.957365] [ 2618] 106 2618 1451707 259898 792 0 0 qemu-system-x86
Mar 18 06:55:41 fractal kernel: [2376404.957387] Memory cgroup out of memory: Kill process 30400 (qemu-system-x86) score 523 or sacrifice child

Furthermore, I am unable to use the workaround suggested in the equivalent Red Hat bug report (https://bugzilla.redhat.com/show_bug.cgi?id=891653) of increasing the cgroup limit with memtune:

# virsh memtune london
hard_limit : 1992192
soft_limit : unlimited
swap_hard_limit: unlimited

# virsh memtune london --hard-limit=4096000
...
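When `virsh memtune --hard-limit` is refused on a running domain, the cgroup v1 limit file can in principle be written directly as root. A sketch that builds both commands, assuming the `/machine/<name>.libvirt-qemu` cgroup path shown in the OOM report above (paths vary by libvirt version, and the helper itself is hypothetical):

```python
def memtune_commands(domain, hard_limit_kib,
                     cgroup_root="/sys/fs/cgroup/memory"):
    """Build the commands for raising a libvirt guest's memcg hard limit.

    The first command is the preferred route via libvirt; the second is a
    cgroup-v1 fallback, run as root, writing bytes (KiB * 1024) directly.
    """
    cg = f"{cgroup_root}/machine/{domain}.libvirt-qemu"
    return [
        f"virsh memtune {domain} --hard-limit {hard_limit_kib}",
        f"echo {hard_limit_kib * 1024} > {cg}/memory.limit_in_bytes",
    ]

for cmd in memtune_commands("london", 4096000):
    print(cmd)
```

A direct write bypasses libvirt's own bookkeeping, so it should be treated as a stopgap until the limit can be set persistently in the domain configuration.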


Revision history for this message
Serge Hallyn (serge-hallyn) wrote :

@Mike,

could you please open a new bug? If I understand correctly, the problem for you is with the checks guarding changes to memory limits.
