[Hyper-V] Dynamic Memory HotAdd memory demand increases very fast to maximum

Bug #1756129 reported by Ionut Lenghel

Affects: linux-azure (Ubuntu)
Status: Confirmed
Importance: High
Assigned to: Unassigned

Bug Description

Issue description: Memory demand grows very quickly up to the maximum available memory, and call traces appear in /var/log/syslog. The VM is unresponsive while the memory is being consumed, but becomes responsive again as soon as stress-ng finishes, which suggests the problem lies in the Dynamic Memory feature.
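
For reference, the symptoms can be captured from inside the guest with standard tools; a minimal sketch (only /var/log/syslog is taken from this report, the rest are generic commands):

 # Watch the guest's view of memory while the test runs (1-second interval)
 watch -n 1 free -m

 # Pull the OOM call traces out of syslog after the run
 grep -B 2 -A 30 "Call Trace" /var/log/syslog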

Platform: host independent

Distribution name and release:
 - Bionic (4.15.0-10-generic)
 - Linux Azure kernel (4.13.0-1012-azure)

Repro rate: 100%

VM configuration:
 Kernel: 4.13.0-1012-azure
 CPU: 8 cores
 RAM: 2048 MB
 Enable Dynamic Memory: Yes
 Minimum RAM: 512 MB
 Maximum RAM: 8192 MB
 Memory buffer: 20%
 Memory weight: 100%
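
Before reproducing, the guest side of Dynamic Memory can be sanity-checked; a small sketch using standard commands (hv_balloon may be built into the azure kernel rather than loaded as a module, so lsmod can come up empty):

 # Confirm the Hyper-V balloon driver is present
 lsmod | grep hv_balloon
 dmesg | grep -i balloon

 # Memory as currently seen by the guest (starts near the 2048 MB startup RAM)
 grep MemTotal /proc/meminfo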

Repro steps:
 1. Start the VM with the above configuration.
 2. Run stress-ng with the following parameters:
  stress-ng -m 16 --vm-bytes 256M -t 200 --backoff 10000000
 3. In less than 60 seconds the entire memory is consumed.
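
To put numbers on step 3, memory growth can be logged in a second terminal alongside the stress run; a minimal sketch (the 60-sample window matches the observed under-60-second ramp, adjust as needed):

 # Report memory in megabytes, once per second, for 60 samples
 vmstat -S m 1 60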

Additional info:
 1. We also tested this on Xenial with the default kernel, and the issue does not occur.
 2. The call trace from syslog is shown below.

Mar 14 08:42:17 xenial kernel: [ 257.120261] Call Trace:
Mar 14 08:42:17 xenial kernel: [ 257.120267] dump_stack+0x63/0x82
Mar 14 08:42:17 xenial kernel: [ 257.120271] dump_header+0x97/0x225
Mar 14 08:42:17 xenial kernel: [ 257.120276] ? security_capable_noaudit+0x4b/0x70
Mar 14 08:42:17 xenial kernel: [ 257.120277] oom_kill_process+0x219/0x420
Mar 14 08:42:17 xenial kernel: [ 257.120279] out_of_memory+0x11d/0x4b0
Mar 14 08:42:17 xenial kernel: [ 257.120282] __alloc_pages_slowpath+0xd32/0xe10
Mar 14 08:42:17 xenial kernel: [ 257.120284] __alloc_pages_nodemask+0x263/0x280
Mar 14 08:42:17 xenial kernel: [ 257.120289] alloc_pages_vma+0x88/0x1e0
Mar 14 08:42:17 xenial kernel: [ 257.120292] shmem_alloc_page+0x70/0xc0
Mar 14 08:42:17 xenial kernel: [ 257.120295] ? __radix_tree_insert+0x45/0x230
Mar 14 08:42:17 xenial kernel: [ 257.120297] shmem_alloc_and_acct_page+0x73/0x1b0
Mar 14 08:42:17 xenial kernel: [ 257.120298] ? find_get_entry+0x1e/0xe0
Mar 14 08:42:17 xenial kernel: [ 257.120300] shmem_getpage_gfp+0x18f/0xbf0
Mar 14 08:42:17 xenial kernel: [ 257.120303] shmem_fault+0x9d/0x1e0
Mar 14 08:42:17 xenial kernel: [ 257.120306] ? file_update_time+0x60/0x110
Mar 14 08:42:17 xenial kernel: [ 257.120309] __do_fault+0x24/0xd0
Mar 14 08:42:17 xenial kernel: [ 257.120311] __handle_mm_fault+0x983/0x1080
Mar 14 08:42:17 xenial kernel: [ 257.120314] ? set_next_entity+0xd9/0x220
Mar 14 08:42:17 xenial kernel: [ 257.120316] handle_mm_fault+0xcc/0x1c0
Mar 14 08:42:17 xenial kernel: [ 257.120319] __do_page_fault+0x258/0x4f0
Mar 14 08:42:17 xenial kernel: [ 257.120320] do_page_fault+0x22/0x30
Mar 14 08:42:17 xenial kernel: [ 257.120323] ? page_fault+0x36/0x60
Mar 14 08:42:17 xenial kernel: [ 257.120325] page_fault+0x4c/0x60
Mar 14 08:42:17 xenial kernel: [ 257.120327] RIP: 0033:0x4493ad
Mar 14 08:42:17 xenial kernel: [ 257.120328] RSP: 002b:00007ffd3ff927b0 EFLAGS: 00010286
Mar 14 08:42:17 xenial kernel: [ 257.120329] RAX: 00000000c6a6a9e7 RBX: 00000001a8ae9d07 RCX: 0000000021b83c00
Mar 14 08:42:17 xenial kernel: [ 257.120330] RDX: 00007f2ad911d000 RSI: 00000000271ea9e7 RDI: 00000000271e9660
Mar 14 08:42:17 xenial kernel: [ 257.120331] RBP: 00007f2adab70000 R08: 000000001bda3103 R09: 0000000048373eca
Mar 14 08:42:17 xenial kernel: [ 257.120331] R10: 0000000048373eca R11: 00007f2acab70000 R12: 0000000000000000
Mar 14 08:42:17 xenial kernel: [ 257.120332] R13: 8309310348373eca R14: 56b63c1fe6166568 R15: 00007f2acab70000
Mar 14 08:42:17 xenial kernel: [ 257.120333] Mem-Info:
Mar 14 08:42:17 xenial kernel: [ 257.120337] active_anon:249563 inactive_anon:83607 isolated_anon:65
Mar 14 08:42:17 xenial kernel: [ 257.120337] active_file:54 inactive_file:74 isolated_file:0
Mar 14 08:42:17 xenial kernel: [ 257.120337] unevictable:2561 dirty:0 writeback:68 unstable:0
Mar 14 08:42:17 xenial kernel: [ 257.120337] slab_reclaimable:10707 slab_unreclaimable:16622
Mar 14 08:42:17 xenial kernel: [ 257.120337] mapped:132260 shmem:332303 pagetables:2888 bounce:0
Mar 14 08:42:17 xenial kernel: [ 257.120337] free:12983 free_pcp:493 free_cma:0
Mar 14 08:42:17 xenial kernel: [ 257.120340] Node 0 active_anon:998252kB inactive_anon:334428kB active_file:216kB inactive_file:296kB unevictable:10244kB isolated(anon):260kB isolated(file):0kB mapped:529040kB dirty:0kB writeback:272kB shmem:1329212kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
Mar 14 08:42:17 xenial kernel: [ 257.120341] Node 0 DMA free:7404kB min:392kB low:488kB high:584kB active_anon:5616kB inactive_anon:2384kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15908kB mlocked:0kB kernel_stack:0kB pagetables:64kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
Mar 14 08:42:17 xenial kernel: [ 257.120345] lowmem_reserve[]: 0 1754 1754 1754 1754
Mar 14 08:42:17 xenial kernel: [ 257.120347] Node 0 DMA32 free:44528kB min:44660kB low:55824kB high:66988kB active_anon:992772kB inactive_anon:331896kB active_file:216kB inactive_file:296kB unevictable:10244kB writepending:600kB present:2080704kB managed:1818152kB mlocked:10244kB kernel_stack:4512kB pagetables:11488kB bounce:0kB free_pcp:1972kB local_pcp:236kB free_cma:0kB
Mar 14 08:42:17 xenial kernel: [ 257.120351] lowmem_reserve[]: 0 0 0 0 0
Mar 14 08:42:17 xenial kernel: [ 257.120353] Node 0 DMA: 1*4kB (U) 9*8kB (UME) 10*16kB (UME) 4*32kB (UM) 4*64kB (UM) 5*128kB (UME) 4*256kB (UM) 2*512kB (ME) 2*1024kB (ME) 1*2048kB (E) 0*4096kB = 7404kB
Mar 14 08:42:17 xenial kernel: [ 257.120363] Node 0 DMA32: 12*4kB (UME) 175*8kB (UE) 138*16kB (UE) 91*32kB (UM) 27*64kB (UM) 49*128kB (UM) 19*256kB (UM) 8*512kB (M) 21*1024kB (M) 0*2048kB 0*4096kB = 45032kB
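
The per-order free-list lines at the end of the trace ("Node 0 DMA: ...") can also be read live from the guest; a small sketch:

 # Same per-zone, per-order free-page counts the OOM report prints
 cat /proc/buddyinfo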

Revision history for this message
Ubuntu Kernel Bot (ubuntu-kernel-bot) wrote : Missing required logs.

This bug is missing log files that will aid in diagnosing the problem. While running an Ubuntu kernel (not a mainline or third-party kernel), please enter the following command in a terminal window:

apport-collect 1756129

and then change the status of the bug to 'Confirmed'.

If, due to the nature of the issue you have encountered, you are unable to run this command, please add a comment stating that fact and change the bug status to 'Confirmed'.

This change has been made by an automated script, maintained by the Ubuntu Kernel Team.

Changed in linux (Ubuntu):
status: New → Incomplete
tags: added: bionic
Chris Valean (cvalean)
summary: [Hyper-V] Dynamic Memory HotAdd memory demand increases very fast to
- maximum and logs show call trace messages
+ maximum
Changed in linux (Ubuntu):
status: Incomplete → Confirmed
Changed in linux (Ubuntu):
importance: Undecided → High
affects: linux (Ubuntu) → linux-azure (Ubuntu)
tags: added: kernel-da-key kernel-hyper-v
Ionut Lenghel (ilenghel)
description: updated
Revision history for this message
Chris Valean (cvalean) wrote :

This behavior is still seen even in the 4.18.0-1002 proposed azure kernel for the Cosmic 18.10 release (daily builds).
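
For anyone who wants to retest on the proposed kernel, something along these lines should work on a Cosmic guest (the pocket and package names below are the standard Ubuntu ones and are an assumption here, not taken from this report):

 # Enable the -proposed pocket and pull in the azure kernel
 echo "deb http://archive.ubuntu.com/ubuntu cosmic-proposed main" | sudo tee /etc/apt/sources.list.d/proposed.list
 sudo apt update
 sudo apt install linux-azure

 # After a reboot, confirm the running kernel version
 uname -r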
