(In reply to comment #65)
> We believe this issue is resolved in Artful (17.10), could you please
> validate before we close this bug?

The issue is still observed on 17.10.

md.unit=kdump-tools.service ata_piix.prefer_ms_hyperv=0 elfcorehdr=157184K
 * loaded kdump kernel

root@ltc-firep3:~# kdump-config show
DUMP_MODE:        kdump
USE_KDUMP:        1
KDUMP_SYSCTL:     kernel.panic_on_oops=1
KDUMP_COREDIR:    /var/crash
crashkernel addr:
   /var/lib/kdump/vmlinuz: symbolic link to /boot/vmlinux-4.13.0-36-generic
kdump initrd:
   /var/lib/kdump/initrd.img: symbolic link to /var/lib/kdump/initrd.img-4.13.0-36-generic
current state:    ready to kdump

kexec command:
  /sbin/kexec -p --command-line="root=UUID=6d6f8d6e-ccb9-49e7-b260-c2e1e3bca3ab ro quiet splash irqpoll nr_cpus=1 nousb systemd.unit=kdump-tools.service ata_piix.prefer_ms_hyperv=0" --initrd=/var/lib/kdump/initrd.img /var/lib/kdump/vmlinuz

root@ltc-firep3:~# stress-ng -a 0
stress-ng: info: [3578] rdrand stressor will be skipped, CPU does not support the rdrand instruction.
stress-ng: info: [3578] tsc stressor will be skipped, CPU does not support the tsc instruction.
stress-ng: info: [3578] disabled 'bind-mount' as it may hang the machine (enable it with the --pathological option)
stress-ng: info: [3578] disabled 'cpu-online' as it may hang the machine (enable it with the --pathological option)
stress-ng: info: [3578] disabled 'oom-pipe' as it may hang the machine (enable it with the --pathological option)
stress-ng: info: [3578] defaulting to a 86400 second (1 day, 0.00 secs) run per stressor
stress-ng: info: [3578] dispatching hogs: 160 af-alg, 160 affinity, 160 aio, 160 aiol, 160 apparmor, 160 atomic, 160 bigheap, 160 branch, 160 brk, 160 bsearch, 160 cache, 160 cap, 160 chdir, 160 chmod, 160 chown, 160 chroot, 160 clock, 160 clone, 160 context, 160 copy-file, 160 cpu, 160 crypt, 160 cyclic, 160 daemon, 160 dccp, 160 dentry, 160 dev, 160 dir, 160 dirdeep, 160 dnotify, 160 dup, 160 epoll, 160 eventfd, 160 exec, 160 fallocate, 160 fanotify, 160 fault, 160 fcntl, 160 fiemap, 160 fifo, 160 filename, 160 flock, 160 fork, 160 fp-error, 160 fstat, 160 full, 160 futex, 160 get, 160 getdent, 160 getrandom, 160 handle, 160 hdd, 160 heapsort, 160 hsearch, 160 icache, 160 icmp-flood, 160 inode-flags, 160 inotify, 160 io, 160 iomix, 160 ioprio, 160 itimer, 160 kcmp, 160 key, 160 kill, 160 klog, 160 lease, 160 link, 160 locka, 160 lockbus, 160 lockf, 160 lockofd, 160 longjmp, 160 lsearch, 160 madvise, 160 malloc, 160 matrix, 160 membarrier, 160 memcpy, 160 memfd, 160 memrate, 160 memthrash, 160 mergesort, 160 mincore, 160 mknod, 160 mlock, 160 mmap, 160 mmapfork, 160 mmapmany, 160 mq, 160 mremap, 160 msg, 160 msync, 160 netdev, 160 netlink-proc, 160 nice, 160 nop, 160 null, 160 numa, 160 opcode, 160 open, 160 personality, 160 pipe, 160 poll, 160 procfs, 160 pthread, 160 ptrace, 160 pty, 160 qsort, 160 quota, 160 radixsort, 160 readahead, 160 remap, 160 rename, 160 resources, 160 rlimit, 160 rmap, 160 rtc, 160 schedpolicy, 160 sctp, 160 seal, 160 seccomp, 160 seek, 160 sem, 160 sem-sysv, 160 sendfile, 160 shm, 160 shm-sysv, 160 sigfd, 160 sigfpe, 160 sigpending, 160 sigq, 160 sigsegv, 160 sigsuspend, 160 sleep, 160 sock, 160 sockdiag, 160 sockfd, 160 sockpair, 160 softlockup, 160 spawn, 160 splice, 160 stack, 160 stackmmap, 160 str, 160 stream, 160 swap, 160 switch, 160 symlink, 160 sync-file, 160 sysfs, 160 sysinfo, 160 tee, 160 timer, 160 timerfd, 160 tlb-shootdown, 160 tmpfs, 160 tsearch, 160 udp, 160 udp-flood, 160 unshare, 160 urandom, 160 userfaultfd, 160 utime, 160 vecmath, 160 vfork, 160 vforkmany, 160 vm, 160 vm-rw, 160 vm-splice, 160 wait, 160 wcs, 160 xattr, 160 yield, 160 zero, 160 zlib, 160 zombie
stress-ng: info: [3578] cache allocate: using built-in defaults as unable to determine cache details
stress-ng: info: [3610] stress-ng-cyclic: for best results, run just 1 instance of this stressor
stress-ng: info: [3623] stress-ng-dirdeep: 60987043 inodes available, exercising up to 60987043 inodes
stress-ng: info: [3639] stress-ng-exec: running as root, won't run test.
[ 190.350568] AppArmor DFA next/check upper bounds error
[ 190.352846] AppArmor DFA next/check upper bounds error
stress-ng: info: [3743] stress-ng-lockbus: this stressor is not implemented on this system: ppc64le Linux 4.13.0-36-generic
stress-ng: info: [3946] stress-ng-numa: system has 2 of a maximum 256 memory NUMA nodes
stress-ng: fail: [4051] stress-ng-quota: quotactl command Q_GETQUOTA failed: errno=3 (No such process)
stress-ng: fail: [4051] stress-ng-quota: quotactl command Q_GETFMT failed: errno=3 (No such process)
stress-ng: fail: [4051] stress-ng-quota: quotactl command Q_GETINFO failed: errno=3 (No such process)
stress-ng: fail: [4140] stress-ng-rtc: ioctl RTC_ALRM_READ failed, errno=22 (Invalid argument)
stress-ng: fail: [3719] stress-ng-key: keyctl KEYCTL_UPDATE failed, errno=127 (Key has expired)
stress-ng: fail: [3719] stress-ng-key: keyctl KEYCTL_READ failed, errno=127 (Key has expired)
stress-ng: fail: [3719] stress-ng-key: request_key failed, errno=126 (Required key not available)
stress-ng: fail: [3719] stress-ng-key: keyctl KEYCTL_DESCRIBE failed, errno=127 (Key has expired)
stress-ng: fail: [3719] stress-ng-key: keyctl KEYCTL_UPDATE failed, errno=127 (Key has expired)
info: 5 failures reached, aborting stress process
[ 191.335947] AppArmor DFA next/check upper bounds error
[ 191.397177] AppArmor DFA next/check upper bounds error
stress-ng: info: [4614] stress-ng-spawn: running as root, won't run test.
stress-ng: info: [4720] stress-ng-stream: using built-in defaults as unable to determine cache details
stress-ng: info: [4720] stress-ng-stream: stressor loosely based on a variant of the STREAM benchmark code
stress-ng: info: [4720] stress-ng-stream: do NOT submit any of these results to the STREAM benchmark results
stress-ng: info: [4720] stress-ng-stream: Using CPU cache size of 2048K
[ 191.645947] AppArmor DFA next/check upper bounds error
stress-ng: info: [4865] stress-ng-sysfs: running as root, just traversing /sys and not read/writing to /sys files.
[ 192.335432] Memory failure: 0xc0190: recovery action for dirty LRU page: Recovered
[ 192.880349] Memory failure: 0xc0193: recovery action for dirty LRU page: Recovered
stress-ng: info: [5568] stress-ng-yield: limiting to 160 yielders (instance 0)
[ 196.195563] sysrq: SysRq : Trigger a crash
[ 196.195835] Unable to handle kernel paging request for data at address 0x00000000
[ 196.196633] Faulting instruction address: 0xc000000000793948
[ 196.196724] Oops: Kernel access of bad area, sig: 11 [#1]
[ 196.196823] SMP NR_CPUS=2048
[ 196.196844] NUMA
[ 196.196915] PowerNV
[ 196.197040] Modules linked in: tgr192 wp512 rmd320 rmd256 unix_diag rmd160 sctp libcrc32c rmd128 md4 binfmt_misc dccp_ipv4 algif_hash dccp af_alg idt_89hpesx joydev input_leds mac_hid ofpart cmdlinepart powernv_flash at24 mtd uio_pdrv_genirq ipmi_powernv powernv_rng uio ibmpowernv vmx_crypto ipmi_devintf ipmi_msghandler opal_prd crct10dif_vpmsum ip_tables x_tables autofs4 hid_generic usbhid hid uas usb_storage ast i2c_algo_bit ttm crc32c_vpmsum drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops drm ahci libahci tg3
[ 196.198435] CPU: 34 PID: 3479 Comm: bash Not tainted 4.13.0-36-generic #40-Ubuntu
[ 196.198530] task: c000000fd7c29b00 task.stack: c000000fd6018000
[ 196.198614] NIP: c000000000793948 LR: c000000000794878 CTR: c000000000793920
[ 196.198711] REGS: c000000fd601b9f0 TRAP: 0300 Not tainted (4.13.0-36-generic)
[ 196.198922] MSR: 9000000000009033
[ 196.199114] CR: 28422222 XER: 20000000
[ 196.199222] CFAR: c00000000000878c DAR: 0000000000000000 DSISR: 42000000 SOFTE: 1
[ 196.199222] GPR00: c000000000794878 c000000fd601bc70 c0000000015f6200 0000000000000063
[ 196.199222] GPR04: c000000ffa48ade8 c000000ffa4a2068 9000000000009033 0000000000000032
[ 196.199222] GPR08: 0000000000000007 0000000000000001 0000000000000000 9000000000001003
[ 196.199222] GPR12: c000000000793920 c000000007a67600 0000000010180df8 0000000010189e30
[ 196.199222] GPR16: 0000000010189ea8 0000000010151210 000000001018bd58 000000001018de48
[ 196.199222] GPR20: 000001003b9701d8 0000000000000001 0000000010164590 0000000010163bb0
[ 196.199222] GPR24: 00007ffffaab70f4 00007ffffaab70f0 c0000000014fa770 0000000000000002
[ 196.199222] GPR28: 0000000000000063 0000000000000004 c0000000014824f4 c0000000014fab10
[ 196.201896] NIP [c000000000793948] sysrq_handle_crash+0x28/0x30
[ 196.201985] LR [c000000000794878] __handle_sysrq+0xf8/0x2b0
[ 196.202895] Call Trace:
[ 196.202938] [c000000fd601bc70] [c000000000794858] __handle_sysrq+0xd8/0x2b0 (unreliable)
[ 196.203054] [c000000fd601bd10] [c000000000795074] write_sysrq_trigger+0x64/0x90
[ 196.203151] [c000000fd601bd40] [c000000000450468] proc_reg_write+0x88/0xd0
[ 196.203235] [c000000fd601bd70] [c0000000003a1f6c] __vfs_write+0x3c/0x70
[ 196.203367] [c000000fd601bd90] [c0000000003a3ba8] vfs_write+0xd8/0x220
[ 196.203501] [c000000fd601bde0] [c0000000003a5a28] SyS_write+0x68/0x110
[ 196.203594] [c000000fd601be30] [c00000000000b184] system_call+0x58/0x6c
[ 196.203681] Instruction dump:
[ 196.203734] 4bfff9f1 4bfffe50 3c4c00e6 384228e0 7c0802a6 60000000 39200001 3d42001d
[ 196.203946] 394adb30 912a0000 7c0004ac 39400000 <992a0000> 4e800020 3c4c00e6 384228b0
[ 196.204221] ---[ end trace c2e83d4780c5d8dd ]---
[ 196.284422]
[ 196.284791] Sending IPI to other CPUs
[ 207.391786] ERROR: 19 cpu(s) not responding
[ 217.392637] kexec: waiting for cpu 2 (physical 10) to enter OPAL
[ 218.394232] kexec: timed out waiting for cpu 2 (physical 10) to enter OPAL
[ 218.394456] kexec: waiting for cpu 7 (physical 15) to enter OPAL
[ 219.396003] kexec: timed out waiting for cpu 7 (physical 15) to enter OPAL
[ 219.396233] kexec: waiting for cpu 16 (physical 24) to enter OPAL
[ 220.398083] kexec: timed out waiting for cpu 16 (physical 24) to enter OPAL
[ 220.398329] kexec: waiting for cpu 29 (physical 45) to enter OPAL
[ 221.401214] kexec: timed out waiting for cpu 29 (physical 45) to enter OPAL
[ 221.401470] kexec: waiting for cpu 42 (physical 74) to enter OPAL
[ 222.405561] kexec: timed out waiting for cpu 42 (physical 74) to enter OPAL
[ 222.405794] kexec: waiting for cpu 45 (physical 77) to enter OPAL
[ 223.409855] kexec: timed out waiting for cpu 45 (physical 77) to enter OPAL
[ 223.410108] kexec: waiting for cpu 56 (physical 96) to enter OPAL
[ 224.415029] kexec: timed out waiting for cpu 56 (physical 96) to enter OPAL
[ 224.415273] kexec: waiting for cpu 60 (physical 100) to enter OPAL
[ 225.420345] kexec: timed out waiting for cpu 60 (physical 100) to enter OPAL
[ 225.420578] kexec: waiting for cpu 61 (physical 101) to enter OPAL
[ 226.425688] kexec: timed out waiting for cpu 61 (physical 101) to enter OPAL
[ 226.425988] kexec: waiting for cpu 62 (physical 102) to enter OPAL
[ 227.431092] kexec: timed out waiting for cpu 62 (physical 102) to enter OPAL
[ 227.431339] kexec: waiting for cpu 65 (physical 105) to enter OPAL
[ 228.436631] kexec: timed out waiting for cpu 65 (physical 105) to enter OPAL
[ 228.437203] kexec: waiting for cpu 84 (physical 1052) to enter OPAL
[ 229.465219] kexec: timed out waiting for cpu 84 (physical 1052) to enter OPAL
[ 229.465561] kexec: waiting for cpu 88 (physical 1056) to enter OPAL
[ 230.493736] kexec: timed out waiting for cpu 88 (physical 1056) to enter OPAL
[ 230.494022] kexec: waiting for cpu 90 (physical 1058) to enter OPAL
[ 231.522156] kexec: timed out waiting for cpu 90 (physical 1058) to enter OPAL
[ 231.522700] kexec: waiting for cpu 101 (physical 1069) to enter OPAL
[ 232.551499] kexec: timed out waiting for cpu 101 (physical 1069) to enter OPAL
[ 232.552046] kexec: waiting for cpu 112 (physical 1096) to enter OPAL
[ 233.581776] kexec: timed out waiting for cpu 112 (physical 1096) to enter OPAL
[ 233.582209] kexec: waiting for cpu 119 (physical 1103) to enter OPAL
[[ 294.370123276,5] OPAL: Switch to big-endian OS
[ 295.370243186,3] OPAL: CPU 0xa not in OPAL !
234.612219] kexec: timed o
 4.04035|Ignoring boot flags, incorrect version 0x0
 4.10333|ISTEP 6. 3
 4.58052|ISTEP 6. 4
 4.58097|ISTEP 6. 5
16.35927|HWAS|PRESENT> DIMM[03]=AAAAAAAAAAAAAAAA
16.35928|HWAS|PRESENT> Membuf[04]=CCCC000000000000
16.35928|HWAS|PRESENT> Proc[05]=C000000000000000
16.45967|ISTEP 6. 6
17.54322|================================================
17.54323|Error reported by unknown (0xE500)
17.54323|
17.54323| ModuleId 0x0b unknown
17.54323| ReasonCode 0xe540 unknown
17.54324| UserData1 unknown : 0x0006000000000101
17.54324| UserData2 unknown : 0xc8e9003600000000
17.54324|User Data Section 0, type UD
17.54325| Subsection type 0x06
17.54325| ComponentId errl (0x0100)
17.54325| CALLOUT
17.54325| PROCEDURE ERROR
17.54326| Procedure: 16
17.54326|User Data Section 1, type UD
17.54326| Subsection type 0x04
17.54327| ComponentId errl (0x0100)
17.54327|User Data Section 2, type UD
17.54327| Subsection type 0x06
17.54327| ComponentId errl (0x0100)
17.54328| CALLOUT
17.54328| HW CALLOUT
17.54328| Reporting CPU ID: 15
17.54328| Called out entity:
17.54329|User Data Section 3, type UD
17.54329| Subsection type 0x33
17.54329| ComponentId unknown (0xe500)
17.54330|User Data Section 4, type UD
17.54330| Subsection type 0x01
17.54330| ComponentId unknown (0xe500)
17.54331| STRING
17.54331|
17.54331|User Data Section 5, type UD
17.54331| Subsection type 0x15
17.54332| ComponentId hb-trace (0x3100)
17.54332|User Data Section 6, type UD
17.54332| Subsection type 0x01
17.54333| ComponentId unknown (0xe500)
17.54333| STRING
17.54333|
17.54334|User Data Section 7, type UD
17.54334| Subsection type 0x03
17.54334| ComponentId errl (0x0100)
17.54335|User Data Section 8, type UD
17.54335| Subsection type 0x01
17.54335| ComponentId errl (0x0100)
17.54335| STRING
17.54336| Hostboot Build ID: hostboot-2eb7706-f28ad92/hbicore.bin
17.54336|User Data Section 9, type UD
17.54336| Subsection type 0x04
17.54337| ComponentId errl (0x0100)
17.54337|================================================
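
For reference, a minimal sketch of the steps used to reproduce the failure above. The stress-ng invocation and kdump-config check match the output shown; the sysrq write is an assumption based on the "SysRq : Trigger a crash" line in the log, using the standard /proc/sysrq-trigger interface.

# confirm the kdump kernel is loaded and ready
kdump-config show

# start all stressors in the background to load the system
stress-ng -a 0 &

# force a kernel crash; the kdump kernel should then capture a dump
echo c > /proc/sysrq-trigger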