The audit entry point expects a non-x32 syscall number in %esi. If the x32 bit is set, audit breaks horribly by indexing the wrong word of the e->rule.mask[word] syscall bitmask. If we reuse the x32 mask (when the kernel is compiled with x32 support), we should be safe. The bit is already masked off at various other points, but syscall_trace_enter, called later, relies on the bit still being set; otherwise strace is unaware of x32 mode and reports syscall_$(512+x) instead of the real syscalls.
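To illustrate the wrong indexing, here is a small userspace sketch of the arithmetic. The constants mirror my reading of the kernel headers (__X32_SYSCALL_BIT from arch/x86/include/uapi/asm/unistd.h, the AUDIT_WORD()/AUDIT_BIT() macros from the audit headers, and AUDIT_BITMASK_SIZE from kernel/audit.h); treat them as assumptions for demonstration, not a copy of kernel code:

#include <stdio.h>

/* Assumed values, copied by hand from the kernel headers for illustration: */
#define __X32_SYSCALL_BIT  0x40000000u  /* marks x32 syscall numbers */
#define AUDIT_BITMASK_SIZE 64           /* words in e->rule.mask[] */
#define AUDIT_WORD(nr)     ((nr) / 32)  /* which mask word a syscall lands in */

int main(void)
{
	unsigned int nr = __X32_SYSCALL_BIT | 1;  /* e.g. an x32 write(2) */

	/* With the x32 bit still set, the word index is astronomically
	 * out of range for a 64-word bitmask. */
	printf("unmasked word index = %u (mask has %d words)\n",
	       AUDIT_WORD(nr), AUDIT_BITMASK_SIZE);

	/* Masking the bit off, as the patch below does with
	 * andl $__SYSCALL_MASK,%esi, restores a sane index. */
	nr &= ~__X32_SYSCALL_BIT;
	printf("masked word index = %u\n", AUDIT_WORD(nr));
	return 0;
}

Running this prints a word index of 33554432 for the unmasked number versus 0 for the masked one, which is exactly the out-of-bounds access described above.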
I think this should fix it:
% diff -Naur /tmp/entry_64.S.orig arch/x86/kernel/entry_64.S
--- /tmp/entry_64.S.orig	2014-05-23 16:12:58.136925093 +0200
+++ arch/x86/kernel/entry_64.S	2014-05-23 16:13:12.229077570 +0200
@@ -699,6 +699,9 @@
 	movq %rsi,%rcx			/* 4th arg: 2nd syscall arg */
 	movq %rdi,%rdx			/* 3rd arg: 1st syscall arg */
 	movq %rax,%rsi			/* 2nd arg: syscall number */
+#if __SYSCALL_MASK != ~0
+	andl $__SYSCALL_MASK,%esi
+#endif
 	movl $AUDIT_ARCH_X86_64,%edi	/* 1st arg: audit arch */
 	call __audit_syscall_entry
 	LOAD_ARGS 0			/* reload call-clobbered registers */