Unfortunately I am still unable to reproduce the problem. I tried using a container and a VM, to no avail.
But I did open the coredump:
(gdb) bt
#0 _int_free (av=av@entry=0x7fcbaccd8b80 <main_arena>, p=p@entry=0x558afb81e0c0, have_lock=<optimized out>, have_lock@entry=1) at malloc.c:4341
#1 0x00007fcbacb84f22 in _int_realloc (av=av@entry=0x7fcbaccd8b80 <main_arena>, oldp=oldp@entry=0x558afb81e070, oldsize=oldsize@entry=8208, nb=80) at malloc.c:4644
#2 0x00007fcbacb86fb6 in __GI___libc_realloc (oldmem=0x558afb81e080, bytes=64) at malloc.c:3226
#3 0x00007fcbacb77748 in _IO_mem_finish (fp=0x558afb805e80, dummy=<optimized out>) at memstream.c:131
#4 0x00007fcbacb6de41 in _IO_new_fclose (fp=fp@entry=0x558afb805e80) at libioP.h:948
#5 0x00007fcbacc03ddb in __vsyslog_internal (pri=<optimized out>, fmt=0x558afa13ac80 "%.500s", ap=0x7ffd8170e5c0, mode_flags=2) at ../misc/syslog.c:237
#6 0x00007fcbacc04363 in __syslog_chk (pri=pri@entry=7, flag=flag@entry=1, fmt=fmt@entry=0x558afa13ac80 "%.500s") at ../misc/syslog.c:136
#7 0x0000558afa0f8b78 in syslog (__fmt=0x558afa13ac80 "%.500s", __pri=7) at /usr/include/x86_64-linux-gnu/bits/syslog.h:31
#8 do_log (level=level@entry=SYSLOG_LEVEL_DEBUG1, fmt=<optimized out>, args=args@entry=0x7ffd8170ef00) at ../../log.c:476
#9 0x0000558afa0f8ff8 in debug (fmt=<optimized out>) at ../../log.c:229
#10 0x0000558afa0ae3fe in server_accept_loop (config_s=0x7ffd8170f050, newsock=<synthetic pointer>, sock_out=<synthetic pointer>, sock_in=<synthetic pointer>) at ../../sshd.c:1338
#11 main (ac=<optimized out>, av=<optimized out>) at ../../sshd.c:2040
This stack trace is intriguing: the crash is inside glibc's allocator itself. __vsyslog_internal formats the log record into a memory stream, and the realloc() that fclose() performs to finalize that buffer (frames #0-#3) is tripping over an inconsistent heap. A crash that deep in malloc usually means the heap was corrupted earlier by something else, and the syslog path is merely where the damage is first detected. It seems to point at some weird interaction between your setup and the memory management involved in syslog.
I spent time trying to find upstream bugs to see if there was anything remotely similar, but couldn't find anything either. Can you provide more details on the setup you're using to reproduce the problem? For example, are you using a VM, a container, bare metal? How many (v)CPUs? What about memory? If it's a container/VM, what's the underlying host?
Also, since you can reproduce the issue pretty reliably, could you perhaps check if the same crash happens on Jammy?
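In the meantime, since glibc is probably detecting corruption that happened earlier, running sshd in the foreground with malloc consistency checks enabled may abort closer to the real culprit. A sketch, where the paths, port, and service name are assumptions to adjust for your system:

```shell
# Stop the service and run sshd manually in debug mode on a spare port,
# with MALLOC_CHECK_=3 so glibc aborts at the first heap inconsistency.
# On glibc >= 2.34 the checks only take effect with libc_malloc_debug
# preloaded (library path may differ on your system).
sudo systemctl stop ssh
sudo env MALLOC_CHECK_=3 \
    LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libc_malloc_debug.so.0 \
    /usr/sbin/sshd -ddd -p 2222
```

A core dump produced that way should have the corrupting frame much nearer the top of the backtrace.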
Thank you.