Simple stunnel DoS when opening and closing connections

Bug #327222 reported by Roman Fiedler
Affects: stunnel4 (Ubuntu)
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

Binary package hint: stunnel4

Usually the stunnel4 main process (the one with the lowest PID) ends up consuming 100% CPU: TCP connections are still accepted, but the SSL handshake is never started. The test scenario below makes 10000 connection attempts, but the run can usually be stopped after 500-1000 of them; stunnel is broken by then.

stunnel4 on hardy x86:
Description: Ubuntu 8.04.2
Release: 8.04

# apt-cache policy stunnel4
stunnel4:
  Installed: 3:4.21-1
  Candidate: 3:4.21-1
  Version table:
 *** 3:4.21-1 0
        500 http://security.ubuntu.com hardy/universe Packages
        100 /var/lib/dpkg/status

# stunnel4 -version
stunnel 4.21 on i486-pc-linux-gnu with OpenSSL 0.9.8g 19 Oct 2007
Threading:PTHREAD SSL:ENGINE Sockets:POLL,IPv6 Auth:LIBWRAP

Global options
debug = 5
pid = /var/run/stunnel4.pid
RNDbytes = 64
RNDfile = /dev/urandom
RNDoverwrite = yes

Service-level options
cert = /etc/stunnel/stunnel.pem
ciphers = AES:ALL:!aNULL:!eNULL:+RC4:@STRENGTH
key = /etc/stunnel/stunnel.pem
session = 300 seconds
sslVersion = SSLv3 for client, all for server
TIMEOUTbusy = 300 seconds
TIMEOUTclose = 60 seconds
TIMEOUTconnect = 10 seconds
TIMEOUTidle = 43200 seconds
verify = none

Test Scenario:

* Generate keys:

openssl req -new -newkey rsa:1024 -nodes -keyout server.key -days 3653 -x509 -out server.cert -subj "/CN=server"
openssl req -new -newkey rsa:1024 -nodes -keyout client.key -days 3653 -x509 -out client.cert -subj "/CN=client"
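
To sanity-check the generated certificates before wiring them into the config (optional; standard openssl x509 usage):

openssl x509 -in server.cert -noout -subject -dates
openssl x509 -in client.cert -noout -subject -dates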

* Create config:

service = test tunnel
foreground = yes
# Debug warnings only
debug = 4

pid = /home/[username]/tmp/tunnel/tunnel.pid

cert = server.cert
key = server.key
verify = 3

[testany]
accept = 1234
exec = /home/[username]/tmp/tunnel/testcmd.sh
execargs = testcmd.sh
CAfile = client.cert

* Create testcmd.sh script:

#!/bin/bash
# Append whatever arrives through the tunnel to a dump file.
cat >> /tmp/dump
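
To confirm that data actually flows end to end during a healthy run, the dump file can be tailed in another shell:

tail -f /tmp/dump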

* Start tunnel in one shell

stunnel4 tunnel.cfg
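
Optionally, to see each connection in stunnel's log while reproducing, raise the debug level in tunnel.cfg first (stunnel uses syslog-style levels; 7 is the most verbose):

debug = 7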

* Start testscript in other:

#!/bin/bash
# Open and abandon SSL connections in a tight loop until stunnel locks up.
procCount=0
while [ "${procCount}" != "10000" ] ; do
  # Background each client with stdin from /dev/null; the stragglers are
  # killed off abruptly by the pkill at the end.
  openssl s_client -key client.key -cert client.cert -connect localhost:1234 < /dev/null > /dev/null 2>&1 &
  let procCount=procCount+1
  # Report progress every 100 connections.
  if [ $((procCount % 100)) -eq 0 ] ; then
    echo "Test: ${procCount}"
  fi
done
pkill -KILL -f "openssl s_client"
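
While the loop runs, the stuck state can be watched for from a third shell (plain ps and netstat, the same tools used in the analysis below):

watch -n 5 'ps -C stunnel4 -o pid,pcpu,stat,args; netstat -tn | grep -c CLOSE_WAIT'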

* When stunnel is dead:

openssl s_client -key client.key -cert client.cert -connect localhost:1234
CONNECTED(00000003)

But no SSL handshake follows; s_client hangs after CONNECTED.

Roman Fiedler (roman-fiedler-deactivatedaccount) wrote:

When broken:

# ps aux | grep stunnel
rfiedler 14247 58.1 13.0 57592 33324 pts/2 Sl+ 16:21 18:05 stunnel4 tunnel.cfg
rfiedler 14248 0.0 0.2 3692 628 pts/2 S+ 16:21 0:00 stunnel4 tunnel.cfg
rfiedler 14249 0.0 0.2 3692 632 pts/2 S+ 16:21 0:00 stunnel4 tunnel.cfg
rfiedler 14250 0.0 0.2 3692 632 pts/2 S+ 16:21 0:00 stunnel4 tunnel.cfg
rfiedler 14251 0.0 0.2 3692 632 pts/2 S+ 16:21 0:00 stunnel4 tunnel.cfg
rfiedler 14252 0.0 0.2 3692 632 pts/2 S+ 16:21 0:00 stunnel4 tunnel.cfg

# ps auxH | grep stunnel | head
rfiedler 14247 0.0 13.0 57592 33324 pts/2 Sl+ 16:21 0:01 stunnel4 tunnel.cfg
rfiedler 14247 0.0 13.0 57592 33324 pts/2 Sl+ 16:33 0:00 stunnel4 tunnel.cfg
rfiedler 14247 89.8 13.0 57592 33324 pts/2 Rl+ 16:33 18:10 stunnel4 tunnel.cfg
rfiedler 14247 0.0 13.0 57592 33324 pts/2 Sl+ 16:33 0:00 stunnel4 tunnel.cfg
rfiedler 14247 0.0 13.0 57592 33324 pts/2 Sl+ 16:33 0:00 stunnel4 tunnel.cfg
rfiedler 14247 0.0 13.0 57592 33324 pts/2 Sl+ 16:33 0:00 stunnel4 tunnel.cfg
rfiedler 14247 0.0 13.0 57592 33324 pts/2 Sl+ 16:33 0:00 stunnel4 tunnel.cfg
rfiedler 14247 0.0 13.0 57592 33324 pts/2 Sl+ 16:33 0:00 stunnel4 tunnel.cfg
rfiedler 14247 0.0 13.0 57592 33324 pts/2 Sl+ 16:33 0:00 stunnel4 tunnel.cfg
rfiedler 14247 0.0 13.0 57592 33324 pts/2 Sl+ 16:33 0:00 stunnel4 tunnel.cfg
....
total 287 procs+threads

# netstat -tnp |head
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 1 0 127.0.0.1:63822 127.0.0.1:51305 CLOSE_WAIT 14247/stunnel4
tcp 1 0 127.0.0.1:63822 127.0.0.1:51655 CLOSE_WAIT 14247/stunnel4
tcp 1 0 127.0.0.1:63822 127.0.0.1:51632 CLOSE_WAIT 14247/stunnel4
tcp 119 0 127.0.0.1:63822 127.0.0.1:51626 CLOSE_WAIT 14247/stunnel4
tcp 1 0 127.0.0.1:63822 127.0.0.1:51615 CLOSE_WAIT 14247/stunnel4
tcp 119 0 127.0.0.1:63822 127.0.0.1:51665 CLOSE_WAIT 14247/stunnel4
tcp 119 0 127.0.0.1:63822 127.0.0.1:51468 CLOSE_WAIT 14247/stunnel4
....
280 entries
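
A per-state summary of those sockets (standard netstat/awk/sort/uniq) makes the CLOSE_WAIT pile-up obvious:

# netstat -tn | awk 'NR>2 {print $6}' | sort | uniq -c | sort -rn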

# gdb --pid 14247
..
(gdb) bt
#0 0xb7f8e410 in __kernel_vsyscall ()
#1 0xb7d59c07 in poll () from /lib/tls/i686/cmov/libc.so.6
#2 0x0805445f in ?? ()
#3 0x08057dbf in ?? ()
#4 0x080582e4 in ?? ()
#5 0xb7ca3450 in __libc_start_main () from /lib/tls/i686/cmov/libc.so.6
#6 0x0804c5b1 in ?? ()
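
The stripped frames (the ?? entries) hide where the loop sits; per-thread backtraces from the same gdb session may narrow it down (standard gdb commands):

(gdb) info threads
(gdb) thread apply all bt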

David G (davidg-esentire) wrote:

I'm not sure if this is the same problem, but with the packaged version of stunnel I stop being able to make connections after it has been running for a while. My stunnel now runs fine using the current version built from source (4.26).

http://www.stunnel.org/news/ states that version 4.21 is experimental and suggests upgrading.
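
For anyone wanting to try the same, stunnel builds with the usual autoconf steps (a sketch; the tarball name and install prefix are assumptions, adjust as needed):

tar xzf stunnel-4.26.tar.gz
cd stunnel-4.26
./configure && make && sudo make install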

Roman Fiedler (roman-fiedler-deactivatedaccount) wrote:

This package/version seems to be the standard with Ubuntu hardy. There is already a newer package in pool/universe (http://archive.ubuntu.com/ubuntu/pool/universe/s/stunnel4/stunnel4_4.22-2_i386.deb) which appears to work on hardy without problems. I'll try to break it using the scripts above. If stunnel4 stays functional, would it be possible to update the package lists so that this package is included in hardy?
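
Fetching and installing that pool package directly is plain wget/dpkg usage:

# wget http://archive.ubuntu.com/ubuntu/pool/universe/s/stunnel4/stunnel4_4.22-2_i386.deb
# dpkg -i stunnel4_4.22-2_i386.deb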

Serge (serg-remote) wrote:

I have exactly the same issue. Any updates on this?
