Comment 9 for bug 1302605

Philipp Kern (pkern) wrote :

I think this should fix it:

 % diff -Naur /tmp/entry_64.S.orig arch/x86/kernel/entry_64.S
--- /tmp/entry_64.S.orig 2014-05-23 16:12:58.136925093 +0200
+++ arch/x86/kernel/entry_64.S 2014-05-23 16:13:12.229077570 +0200
@@ -699,6 +699,9 @@
  movq %rsi,%rcx /* 4th arg: 2nd syscall arg */
  movq %rdi,%rdx /* 3rd arg: 1st syscall arg */
  movq %rax,%rsi /* 2nd arg: syscall number */
+#if __SYSCALL_MASK != ~0
+ andl $__SYSCALL_MASK,%esi
+#endif
  movl $AUDIT_ARCH_X86_64,%edi /* 1st arg: audit arch */
  call __audit_syscall_entry
  LOAD_ARGS 0 /* reload call-clobbered registers */

The audit entry point expects a non-x32 syscall number in %esi. If the x32 bit is set, it breaks horribly by indexing the wrong word of the e->rule.mask[word] syscall bitmask. Masking the number with __SYSCALL_MASK (which clears the x32 bit when the kernel is built with x32 support, and is a no-op otherwise) should make this safe. The same masking is already done at various other points, but syscall_trace_enter, which is called later, relies on the bit still being set in the syscall number: otherwise strace would be unaware of the x32 mode and report syscall_$(512+x) instead of the real syscalls. That is why the patch only masks the copy passed to __audit_syscall_entry.
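
To illustrate the misindexing, here is a rough, self-contained C sketch, not the actual kernel source: the __X32_SYSCALL_BIT value and the AUDIT_WORD/AUDIT_BIT/AUDIT_BITMASK_SIZE definitions are recalled from the audit and unistd headers and may differ in detail, but the arithmetic shows why an unmasked x32 syscall number lands far outside the rule bitmask while the masked number indexes it correctly:

/* Sketch only: how an x32 syscall number overflows the audit
 * rule bitmask if the x32 bit is not masked off first. */
#include <stdio.h>

#define __X32_SYSCALL_BIT  0x40000000U           /* bit 30 marks x32 syscalls */
#define __SYSCALL_MASK     (~__X32_SYSCALL_BIT)  /* defined when x32 support is built in */

#define AUDIT_BITMASK_SIZE 64                    /* words in e->rule.mask[] */
#define AUDIT_WORD(nr)     ((nr) / 32)           /* which mask word gets indexed */
#define AUDIT_BIT(nr)      (1U << ((nr) % 32))

int main(void)
{
        /* write(2) via the x32 ABI: plain number 1 with the x32 bit set. */
        unsigned int nr = __X32_SYSCALL_BIT | 1;

        /* Unmasked, the word index is far beyond the bitmask array. */
        printf("unmasked: word=%u (array has only %d words)\n",
               AUDIT_WORD(nr), AUDIT_BITMASK_SIZE);

        /* With the patch, the audit code sees the plain syscall number. */
        printf("masked:   word=%u bit=0x%x\n",
               AUDIT_WORD(nr & __SYSCALL_MASK), AUDIT_BIT(nr & __SYSCALL_MASK));
        return 0;
}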