In the per-guest qemu log one can always see:

  2017-03-07 08:59:35.748+0000: shutting down

Then libvirt throws:

  Mär 07 09:59:35 lap libvirtd[3023]: internal error: End of file from monitor

It seems that even on a clean shutdown I see a kill signal beforehand, like:

  qemu-system-x86_64: terminating on signal 15 from pid 3023

So that might be "normal" when stopped like this. There are several modes for "virsh shutdown":

  Shutdown methods: "acpi", "agent", "initctl", "signal" and "paravirt"

And one can also just shut down from inside the guest. Notation is "error log msg / kill signal", so Y/Y means both happened.

  acpi:                       Y/Y
  agent:                      Y/Y
  shutdown -h 0 inside guest: Y/Y
  initctl:                    not supported for qemu
  signal:                     not supported for qemu
  paravirt:                   not supported for qemu

So the "mode" doesn't matter - which at least tells us the kill happens in code a bit outside the actual shutdown handling. The signal comes from "/usr/sbin/libvirtd" itself, and since it does so independent of the shutdown method it seems to be some sort of generic "cleanup". Let's find that in the code.
There are three kills in an strace of the shutdown action (some context kept to find these again later, the kills marked with XX):

   0.000020 write(22, "
   0.000032 fsync(22) = 0 <0.000006>
   0.000018 close(22) = 0 <0.000007>
   0.000020 rename("/var/run/libvirt/qemu/zesty-test-log.xml.new", "/var/run/libvirt/qemu/zesty-test-log.xml") = 0 <0.000024
   0.000039 unlink("/var/run/libvirt/qemu/zesty-test-log.xml.new") = -1 ENOENT (No such file or directory) <0.000009>
XX 0.000027 kill(11836, SIGTERM) = 0 <0.000010>
   0.000031 poll([{fd=6, events=POLLIN}, {fd=8, events=POLLIN}, {fd=11, events=POLLIN}, {fd=12, events=POLLIN}, {fd=13, even
   0.000113 read(26, "{\"timestamp\": {\"seconds\": 148887"..., 1023) = 81 <0.000012>
   0.000035 close(22) = 0 <0.000044>
   0.000060 unlink("/var/lib/libvirt/qemu/channel/target/domain-zesty-test-log/org.qemu.guest_agent.0") = -1 ENOENT (No such
XX 0.000032 kill(11836, SIGTERM) = 0 <0.000013>
   0.000029 nanosleep({0, 200000000}, NULL) = 0 <0.200086>
XX 0.200115 kill(11836, SIG_0) = -1 ESRCH (No such process) <0.000007>
   0.000049 open("/dev/null", O_RDWR|O_CLOEXEC) = 22 <0.000020>
   0.000045 pipe2([23, 24], O_CLOEXEC) = 0 <0.000010>
   0.000027 pipe2([25, 26], O_CLOEXEC) = 0 <0.000009>
   0.000021 rt_sigprocmask(SIG_SETMASK, ~[RTMIN RT_1], [ALRM], 8) = 0 <0.000007>

Install libvirt0-dbg and break on those kills when they are called from the daemon.
Found the "kill" that does this, with a backtrace like:

#0  kill () at ../sysdeps/unix/syscall-template.S:84
#1  0x00007f3511d3014a in virProcessKill (pid=, sig=) at ../../../src/util/virprocess.c:333
#2  0x00007f34ea693928 in qemuProcessKill (vm=vm@entry=0x7f34dc004f60, flags=flags@entry=2) at ../../../src/qemu/qemu_process.c:5213
#3  0x00007f34ea693a0d in qemuProcessShutdownOrReboot (driver=, vm=0x7f34dc004f60) at ../../../src/qemu/qemu_process.c:612
#4  0x00007f34ea693b95 in qemuProcessHandleShutdown (mon=, vm=0x7f34dc004f60, opaque=0x7f34e03b3310) at ../../../src/qemu/qemu_process.c:686
#5  0x00007f34ea6a9682 in qemuMonitorEmitShutdown (mon=0x7f34c00170e0) at ../../../src/qemu/qemu_monitor.c:1232
#6  0x00007f34ea6bda72 in qemuMonitorJSONIOProcessEvent (obj=0x55d05b5c8fa0, mon=0x7f34c00170e0) at ../../../src/qemu/qemu_monitor_json.c:173
#7  qemuMonitorJSONIOProcessLine (msg=0x0, line=, mon=0x7f34c00170e0) at ../../../src/qemu/qemu_monitor_json.c:202
#8  qemuMonitorJSONIOProcess (mon=mon@entry=0x7f34c00170e0, data=0x55d05b625130 "{\"timestamp\": {\"seconds\": 1488879468, \"microseconds\": 318008}, \"event\": \"SHUTDOWN\"}\r\n", len=85, msg=msg@entry=0x0) at ../../../src/qemu/qemu_monitor_json.c:244
#9  0x00007f34ea6a874a in qemuMonitorIOProcess (mon=0x7f34c00170e0) at ../../../src/qemu/qemu_monitor.c:426
#10 qemuMonitorIO (watch=watch@entry=196, fd=, events=0, events@entry=1, opaque=opaque@entry=0x7f34c00170e0) at ../../../src/qemu/qemu_monitor.c:680
#11 0x00007f3511d00393 in virEventPollDispatchHandles (fds=, nfds=) at ../../../src/util/vireventpoll.c:509
#12 virEventPollRunOnce () at ../../../src/util/vireventpoll.c:658
#13 0x00007f3511cfec01 in virEventRunDefaultImpl () at ../../../src/util/virevent.c:314
#14 0x00007f3511e4ab4d in virNetDaemonRun (dmn=0x55d05b5ddb10) at ../../../src/rpc/virnetdaemon.c:701
#15 0x000055d059e9e407 in main (argc=, argv=) at ../../../daemon/libvirtd.c:1577

In qemuProcessHandleShutdown, as a tail-end cleanup (no matter whether e.g. priv->agent was used or not), it kills that way. I jumped over the actual kill in gdb, and that is indeed the one place that makes the qemu process go away; skipping it leaves things stale (qemu still there, libvirt thinks it is in shutdown).

It seems that on the SIGTERM qemu closes the monitor file, and this is "unexpected" by libvirt and reported as an error. Libvirt should know that when it kills the process the monitor is going to become unavailable, and stop complaining. I guess it does not detach earlier in case e.g. the shutdown goes sour and it would still need the connection - but then, how much would you expect to still be available after a SIGTERM?

Debugging shows that this call comes from:

#0  virErrorMsg (error=error@entry=VIR_ERR_INTERNAL_ERROR, info=0x7fffdaafc710 "End of file from monitor") at ../../../src/util/virerror.c:830
#1  0x00007f3511cfe63b in virReportErrorHelper (domcode=domcode@entry=10, errorcode=errorcode@entry=1, filename=filename@entry=0x7f34ea71b3c8 "../../../src/qemu/qemu_monitor.c", funcname=funcname@entry=0x7f34ea71e858 <__FUNCTION__.31504> "qemuMonitorIO", linenr=linenr@entry=689, fmt=fmt@entry=0x7f34ea71ec88 "%s") at ../../../src/util/virerror.c:1414
#2  0x00007f34ea6a8820 in qemuMonitorIO (watch=watch@entry=210, fd=, events=, events@entry=9, opaque=opaque@entry=0x7f34c0002d00) at ../../../src/qemu/qemu_monitor.c:688
#3  0x00007f3511d00393 in virEventPollDispatchHandles (fds=, nfds=) at ../../../src/util/vireventpoll.c:509
#4  virEventPollRunOnce () at ../../../src/util/vireventpoll.c:658
#5  0x00007f3511cfec01 in virEventRunDefaultImpl () at ../../../src/util/virevent.c:314
#6  0x00007f3511e4ab4d in virNetDaemonRun (dmn=0x55d05b5ddb10) at ../../../src/rpc/virnetdaemon.c:701
#7  0x000055d059e9e407 in main (argc=, argv=) at ../../../daemon/libvirtd.c:1577

This is the common path of handling the qemu monitor; it knows nothing about the fact that we are about to shut the guest down. Therefore, when the guest goes away, it throws that message.
This is also what makes the issue hard to find via search engines, as any crash that unexpectedly takes the qemu process away will report exactly the same message. Actually, the error-reporting code shares the same context as the shutdown handling (both live in src/qemu/qemu_monitor.c and have a monitor pointer). When passing through "qemuMonitorEmitShutdown" it could set something like mon->shuttingdown = 1; and in qemuMonitorIO, at the !error check, it could test that flag. If it is set, the virReportError should be converted to an INFO report (or dropped entirely). If a patch along these lines is easy enough as a suggestion for upstream, I'll create one after lunch. But I need to test it against latest upstream git first.