Regression: running the ssllabs test causes 2 apache processes to eat up 100% CPU, easy DoS

Bug #1836329 reported by Stefan Huehner on 2019-07-12
This bug affects 2 people
Affects: apache2 (Ubuntu)
Status tracked in Eoan
Assigned to: Andreas Hasenack

Bug Description

With the latest apache 2.4.29-1ubuntu4.7 published to 18.04 LTS bionic, running the ssllabs test against it to verify the configuration leaves 2 apache processes using 100% CPU indefinitely.

Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.

[Test Case]
We didn't find a reproducer that didn't involve the ssllabs test, so the test case needs a publicly facing server with a DNS record.

On a test system that has a public IP and is reachable via https on a hostname (not just IP):
sudo apt update
sudo apt install apache2
sudo a2enmod ssl
sudo a2ensite default-ssl.conf
sudo service apache2 restart

In a terminal, monitor the apache2 processes CPU usage with top.
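For the monitoring step, something like the following works as an alternative to top (a sketch, not from the original report; the field names assume a procps-based ps):

```shell
# List apache2 workers sorted by CPU usage; wrap in watch(1) to see
# which processes keep spinning after the test has finished.
ps -C apache2 -o pid,stat,%cpu,etime,cmd --sort=-%cpu
```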

Go to the ssllabs test page and input the URL of your test server, using https. After a few seconds, the site will ask whether it should ignore the certificate error; confirm this, and let it continue the test.

After a few minutes, the test will finish and you will get a report. Go back to the terminal where top is running, and the apache2 processes will be spinning and using CPU, even though there isn't any more traffic.

With the fixed packages, the apache processes will remain idle.

[Regression Potential]
This upload fixes a regression introduced by the fix for a previous regression (bug #1833039), which shows that the situation is tricky. The fix here (clear-retry-flags-before-abort.patch) at least does not change anything in the previous patch from bug #1833039, so that fix remains correct.
The second patch, for http/2 errors with openssl 1.1.1, unfortunately has no test case; it deals with error status and is specific to openssl 1.1.1. It has been applied upstream (and backported to the 2.4.x branch) for many months now. The upstream trunk commit has a more elaborate explanation of the behavior changes this does, and doesn't, introduce.
We do have a DEP8 test that covers HTTP/2 SSL downloads, and it passes. But it also passed before this patch. I also manually tried such downloads of varying sizes (up to 10Mbytes) with no failures.

[Other Info]
While investigating this issue, another fix for an openssl 1.1.1 issue was found in the apache upstream git repo which involves http2 and how the code handles SSL_read() return values:
No upstream bug was found, nor could I come up with a reproducer case, but it seemed sensible to include that patch in this SRU, which was, after all, triggered by the openssl 1.1.1 upgrade in bionic.
The d/t/run-test-suite DEP8 test is falsely returning success: the suite is not actually run when invoked as root, so it cannot fail either. I filed bug #1836898 about this, and ran it manually for both cosmic and bionic. There is one test failure, but it's a trivial one, introduced by a patch that added a comment; the test actually parses C comments in that particular header file. The bug has the details.

cosmic patched to actually run the testsuite, showing that failure:

Same for bionic:

[Original Description]

With the latest apache 2.4.29-1ubuntu4.7 published to 18.04 LTS bionic, running the ssllabs test against it to verify the configuration leaves 2 apache processes using 100% CPU indefinitely.

Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.

So far I do not know whether it is easy/likely to hit this case in normal https usage, or whether it is only triggered by that testing site.

But given that this is backported to an LTS and allows an easy DoS, maybe the 4.7 update should be backed out?

So this is likely a regression in the update to 4.7, which contained only a single fix:

Extra info observed when that ssltest is over but processes are still there using up cpu:

/server-status shows both processes 25234,25235 here in 'Reading' state:
Srv PID Acc M CPU SS Req Conn Child Slot Client Protocol VHost Request
0-0 25234 0/0/0 W 0.00 0 0 0.0 0.00 0.00 http/1.1 GET /server-status HTTP/1.1
0-0 25234 0/0/0 R 0.00 641 0 0.0 0.00 0.00 http/1.1
1-0 25235 0/1/1 _ 0.00 505 2 0.0 0.00 0.00 http/1.1
1-0 25235 0/1/1 _ 0.00 501 0 0.0 0.00 0.00 http/1.1
1-0 25235 0/1/1 _ 0.00 500 0 0.0 0.00 0.00 http/1.1
1-0 25235 0/1/1 _ 0.00 494 0 0.0 0.00 0.00 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 http/1.1
1-0 25235 0/1/1 _ 16.93 596 0 0.0 0.00 0.00 http/1.1
1-0 25235 0/1/1 _ 0.01 595 1 0.0 0.00 0.00 http/1.1
1-0 25235 0/0/0 R 0.00 679 0 0.0 0.00 0.00 http/1.1

netstat on system:
tcp6 1 0 CLOSE_WAIT
tcp6 1 0 CLOSE_WAIT

with no other connections to port 443.
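The leftover CLOSE_WAIT sockets can also be listed with ss from iproute2 (a sketch, not from the original report; assumes the default HTTPS port):

```shell
# List TCP connections stuck in CLOSE_WAIT whose local port is 443.
ss -tn state close-wait '( sport = :443 )'
```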

Related branches

Andreas said (on the other bug) he wants to look into this today - assigning him and setting prio to high according to the discussion so far.

@Stefan - since the ssllabs test is external, have you found anything else, maybe something reproducible locally that would trigger the same behavior without needing to expose the test host to the internet?

Changed in apache2 (Ubuntu):
importance: Undecided → High
assignee: nobody → Andreas Hasenack (ahasenack)

I followed [1] to do some checks against the version reported to be bad.

The easiest copy and paste setup would be:
$ sudo apt install apache2
$ IP=$(hostname -i | cut -d' ' -f 2)
$ sudo sed -i -e "/ServerAdmin/a ServerName $IP" -e 's/ssl-cert-snakeoil.pem/apache-selfsigned.crt/' -e 's/ssl-cert-snakeoil.key/apache-selfsigned.key/' /etc/apache2/sites-available/default-ssl.conf
$ sudo sed -i -e "/ServerAdmin/a Redirect \"/\" \"https://$IP/\"" /etc/apache2/sites-available/000-default.conf
$ cat << EOF | sudo tee /etc/apache2/conf-available/ssl-params.conf
SSLProtocol All -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLHonorCipherOrder On
Header always set X-Frame-Options DENY
Header always set X-Content-Type-Options nosniff
SSLCompression off
SSLUseStapling on
SSLStaplingCache "shmcb:logs/stapling-cache(150000)"
SSLSessionTickets Off
EOF
$ (sleep 2s; printf "\n"; sleep 2s; printf "\n"; sleep 2s; printf "\n"; sleep 2s; printf "\n"; sleep 2s; printf "\n"; sleep 2s; printf "$IP\n"; sleep 2s; printf "\n";) | sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/apache-selfsigned.key -out /etc/ssl/certs/apache-selfsigned.crt
$ sudo a2enmod ssl
$ sudo a2enmod headers
$ sudo a2ensite default-ssl
$ sudo a2enconf ssl-params
$ sudo apache2ctl configtest
$ sudo systemctl restart apache2

The above works in a LXD container and would give a basic setup to test.
Basic usage of this server was fine for me, doing some ssl checks now ...
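As an aside, the interactive printf pipeline above for generating the self-signed certificate can be replaced by a non-interactive call; this is a sketch assuming the same file paths, using -subj to supply the CN so no prompts need to be answered:

```shell
IP=$(hostname -i | cut -d' ' -f 2)
# Generate key and self-signed certificate in one non-interactive step.
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -subj "/CN=$IP" \
  -keyout /etc/ssl/private/apache-selfsigned.key \
  -out /etc/ssl/certs/apache-selfsigned.crt
```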


Since ssllabs [1] is only for exposed hosts I checked their list of other tools [2].
From there I picked the offline tools that seemed usable locally [3][4][5]


#1 cipherscan [3]
./cipherscan -servername
Certificate: untrusted, bits, signature
TLS ticket lifetime hint:
NPN protocols:
OCSP stapling: not supported
Cipher ordering: server
Curves ordering: none - fallback: no
Renegotiation test error
Supported compression methods test error

TLS Tolerance: no
Fallbacks required:
big-SSLv3 config not supported, connection failed
big-TLSv1.0 config not supported, connection failed
big-TLSv1.1 config not supported, connection failed
big-TLSv1.2 config not supported, connection failed
small-SSLv3 config not supported, connection failed
small-TLSv1.0 config not supported, connection failed
small-TLSv1.0-notlsext config not supported, connection failed
small-TLSv1.1 config not supported, connection failed
small-TLSv1.2 config not supported, connection failed
v2-big-TLSv1.2 no fallback req, connected: TLSv1.2 DHE-RSA-AES256-GCM-SHA384
v2-small-SSLv3 config not supported, connection failed
v2-small-TLSv1.0 config not supported, connection failed
v2-small-TLSv1.1 config not supported, connection failed
v2-small-TLSv1.2 no fallback req, connected: TLSv1.2 DHE-RSA-AES256-SHA

Intolerance to:
 SSL 3.254 : absent
 TLS 1.2 : absent
 TLS 1.3 : absent
 TLS 1.4 : absent


#2 sslyze [4]
$ apt install python-pip
$ pip install --upgrade setuptools
$ pip install --upgrade sslyze
$ python -m sslyze --regular





 * OpenSSL CCS Injection:
                                          OK - Not vulnerable to OpenSSL CCS injection

 * Session Renegotiation:
       Client-initiated Renegotiation: OK - Rejected
       Secure Renegotiation: OK - Supported

 * OpenSSL Heartbleed:
                                          OK - Not vulnerable to Heartbleed

 * Resumption Support:
      With Session IDs: OK - Supported (5 successful, 0 failed, 0 errors, 5 total attempts).
      With TLS Tickets: NOT SUPPORTED - TLS ticket not assigned.

 * SSLV3 Cipher Suites:
      Server rejected all cipher suites.

 * TLSV1 Cipher Suites:
      Server rejected all cipher suites.

 * SSLV2 Cipher Suites:
      Server rejected all cipher suites.

 * TLSV1_3 Cipher Suites:
      Server rejected all cipher suites.

 * Downgrade Attacks:
       TLS_FALLBACK_SCSV: OK - Supported

 * TLSV1_2 Cipher Suites:
       Forward Secrecy OK - Supported
       RC4 OK - Not Supported

        TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 ECDH-256 bits 256 bits HTTP 200 OK
        TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 DH-2048 bits 256 bits HTTP 200 OK
        TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA ECDH-256 bits 256 bits HTTP 200 OK
        TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 ECDH-256 bits 256 bits HTTP 200 OK
        DHE_RSA_WITH_AES_256_CCM_8 - 256 bits HTTP 200 OK
        TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 ECDH-256 bits 256 bits HTTP 200 OK
        TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 DH-2048 bits 256 bits HTTP 200 OK
        TLS_DHE_RSA_WITH_AES_256_CBC_SHA DH-2048 bits 256 bits HTTP 200 OK
        TLS_DHE_RSA_WITH_AES_256_CCM - 256 bits HTTP 200 OK
        TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 ECDH-256 bits 128 bits HTTP 200 OK ...



#3 testssl [5]

$ wget
$ tar xf 3.0rc5.tar.gz
$ cd
$ ./

########################################################### 3.0rc5 from

      This program is free software. Distribution and
             modification under GPLv2 permitted.

       Please file bugs @


 Using "OpenSSL 1.0.2-chacha (1.0.2k-dev)" [~183 ciphers]
 on e:./bin/openssl.Linux.x86_64
 (built: "Jan 18 17:12:17 2019", platform: "linux-x86_64")

 Start 2019-07-15 07:03:37 -->> ( <<--

 rDNS ( b.lxd.
 Service detected: HTTP

 Testing protocols via sockets except NPN+ALPN

 SSLv2 not offered (OK)
 SSLv3 not offered (OK)
 TLS 1 not offered
 TLS 1.1 not offered
 TLS 1.2 offered (OK)
 TLS 1.3 not offered
 NPN/SPDY not offered
 ALPN/HTTP2 http/1.1 (offered)

 Testing cipher categories

 NULL ciphers (no encryption) not offered (OK)
 Anonymous NULL Ciphers (no authentication) not offered (OK)
 Export ciphers (w/o ADH+NULL) not offered (OK)
 LOW: 64 Bit + DES, RC[2,4] (w/o export) not offered (OK)
 Triple DES Ciphers / IDEA not offered (OK)
 Average: SEED + 128+256 Bit CBC ciphers offered
 Strong encryption (AEAD ciphers) offered (OK)

 Testing robust (perfect) forward secrecy, (P)FS -- omitting Null Authentication/Encryption, 3DES, RC4

                              DHE-RSA-AES256-SHA ECDHE-RSA-AES128-GCM-SHA256 DHE-RSA-AES128-GCM-SHA256
 Elliptic curves offered: prime256v1 secp384r1 secp521r1 X25519 X448
 DH group offered: RFC3526/Oakley Group 14 (2048 bits)

 Testing server preferences

 Has server cipher order? yes (OK)
 Negotiated protocol TLSv1.2
 Negotiated cipher ECDHE-RSA-AES256-GCM-SHA384, 256 bit ECDH (P-256)
 Cipher order
               DHE-RSA-AES256-CCM DHE-RSA-AES256-SHA256 DHE-RSA-AES256-SHA

 Testing server defaults (Server Hello)

 TLS extensions (standard) "renegotiation info/#65281" "EC point formats/#11" "max fragment length/#1" "application layer protocol negotiation/#16" "encrypt-then-mac/#22"
                              "extended master secret/#23"
 Session Ticket RFC 5077 hint no -- no lifetime advertised
 SSL Session ID support yes
 Session Resumption Tickets no, ID: yes
 TLS clock skew Random values, no fingerprinting possible
 Signature Algorithm SHA256 with RSA
 Server key size RSA 2048 bits
 Server key usage --...

Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in apache2 (Ubuntu):
status: New → Confirmed
Giraffe (dodger-forum) wrote :

Can confirm this bug; it affects our server as well.

#1, #2 and #3 gave me nothing in terms of "load" on the target apache server after the actual run.
I had the expected worker threads around but those were idling.

I had no connections in CLOSE_WAIT state nor showing up as reading in the server status module.

@Stefan - we have two obvious paths from here:
a) only the ssllabs test causes this, which will make testing on the usual non-exposed debug systems much harder
b) some other part of your setup combined with the new version triggers this behavior.

You could help tremendously if you have the option to set up either:
- verify (a): set up a default config like I outlined, but exposed to the internet, and run the ssllabs test against it (if you have the means to do so)
- verify (b): run the tests mentioned above (and others if you know/have them) against your existing setup. Does any of them trigger the bad behavior for your setup?
If it does, try dropping the custom config one piece at a time until we have identified which part is the critical one.

Marking incomplete to get this extra info. I hope the pre-work from both of us will help Andreas (who will show up in a bit) get a fast start on this.

Changed in apache2 (Ubuntu):
status: Confirmed → Incomplete

@Giraffe - glad for this extra confirmation; it would be great if you could help as well to trim the test down to something more easily reproducible.

Since we have more affected people I'll raise the severity and set the regression tag.
If the bug originally fixed weren't severe as well, I'd already upload a revert for now.
But I'll leave that decision to Andreas, who will be here in a bit.

I might try to sort out the upstream patch and create a test PPA if something feasible comes up, but for now the biggest step forward would be a simplification of the repro steps.

Changed in apache2 (Ubuntu):
importance: High → Critical
Giraffe (dodger-forum) wrote :

I'll see what I can do, happy to help out.

Would it help if I e-mail you our "/etc/apache2/mods-enabled/ssl.conf" config and a copy of "ssl.conf"?

If it's non-confidential, attaching it here would be best, so I can try with exactly your settings.
If it is confidential, the next best option would be to replace the few critical parts with placeholders like <INSERTYOURVALUE> and then attach it here.
Only if neither of those is feasible would mailing it be the fallback (which would still be better than not having tried with your config at all, so thanks for the offer).

tags: added: regression-update
Giraffe (dodger-forum) wrote :
Giraffe (dodger-forum) wrote :

I dropped my former ssl-params.conf and added your settings to the mod config:
+ SSLHonorCipherOrder on
+ SSLOpenSSLConfCmd Curves brainpoolP512r1:brainpoolP384r1:brainpoolP256r1:P-521:P-384:X448:X25519:P-256
+ SSLOpenSSLConfCmd DHParameters "/etc/ssl/dhparam.pem"
+ SSLSessionTickets off
+ SSLOpenSSLConfCmd Options +PrioritizeChaCha

Only the /etc/ssl/dhparam.pem needs adaptation; for me that meant generating a custom pem:
$ openssl dhparam -out /etc/ssl/dhparam.pem 2048
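A quick sanity check on the generated file (not part of the original comment) is:

```shell
# Print the DH parameter size; should report (2048 bit) for the file above.
openssl dhparam -in /etc/ssl/dhparam.pem -text -noout | head -n1
```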

And in the default-ssl.conf I added your settings (the rest was just mail addresses and our different certificates).

section #1 is the same as added in mod-ssl.conf
+ HSTS header
+ http2

And outside the /VirtualHost context you added the OCSP stapling config.

With all the tests thrown against it I still get no hanging apache workers in my case.

That seems to be all the differences we had in our configs (unless we missed other important files).
That leaves:
- the ssllabs test does something special we haven't captured yet in our try to recreate
- it depends on the certificate itself (my self signed vs whatever real cert you use)
- other config is important that we haven't found yet

TODO that remains:
- run the ssllabs test against an easy default self-signed setup like the one outlined here (to verify that it triggers the issue and does not depend on further config)
- experiment and find a local test tool that triggers the issue

Giraffe (dodger-forum) wrote :

If you need more info about the system the server runs on I'll be happy to share them privately, if so, please let me know.

I have set up what I described in comment #2 in an OpenStack instance that has a public IP.
That already was quite a bit of setup work, but now I realized that I also need DNS to be able to use the ssllabs check :-/

I had some free DNS entries available to configure, and for now have pointed one of my domains at that test host.

@ahasenack - your ssh key is imported on the system which has this ssl setup; hopefully later today DNS will have propagated enough to run the ssllabs test against it.

@ahasenack - I'll share the domain privately so that you can continue testing.

OK, DNS propagation has happened.

Scanning with ssllabs test:
- 2.4.29-1ubuntu4.6 (all ok)
- 2.4.29-1ubuntu4.7 (ok as well)

I see worker threads start while the test is running, 32 or so.
But the CPU consumption never peaks to anything high.

Nor do processes hang around in CLOSE_WAIT state.

There must be something else to it.

Next I ran:
- 2.4.29-1ubuntu4.7 + giraffe config which seems to trigger the bug

And I got:
~100% CPU top:
4375 www-data 20 0 758280 7072 4944 S 88.2 1.5 4:32.91 apache2
tcp6 1 0 paelzer-apache-bu:https CLOSE_WAIT


Now this might be flaky, I'll reset and try again.

Tried 2 more times with this config; we are now at 3/3 hits.
Seems reproducible enough?

Difference in ssllabs output:
 HTTP Strict Transport Security (HSTS) with long duration deployed on this server.
 This is green but downgrades the protocol result by 5%.
 Anyway, this is one of the changes that we will disable when hunting for the critical config.

The section where the 100% CPU starts to show up is labelled "Testing renegotiation", which would match expectations, as the fix was about SSL renegotiation.

I'm now dropping config differences one by one ...

I tried my local tools; testssl also tests renegotiation but doesn't trigger this :-/
sslyze also has:
 * Session Renegotiation:
       Client-initiated Renegotiation: OK - Rejected
       Secure Renegotiation: OK - Supported

$ openssl s_client -connect
Secure Renegotiation IS supported

#HSTS Header
Header always set Strict-Transport-Security: "max-age=63072000; includeSubDomains; preload"
=> Still triggering ...

#Enable http2
Protocols h2 http/1.1
SSLUseStapling on
SSLStaplingResponderTimeout 5
SSLStaplingReturnResponderErrors off
SSLStaplingCache shmcb:/var/run/ocsp(128000)
=> Still triggering ...

Disable (in mod and site config):
SSLSessionTickets off
SSLOpenSSLConfCmd Options +PrioritizeChaCha
=> Still triggering ...

Disable (in mod and site config):
SSLHonorCipherOrder on
SSLOpenSSLConfCmd Curves X448:X25519:P-256:P-384
SSLOpenSSLConfCmd DHParameters "/etc/ssl/dhparam.pem"
=> Still triggering ...

This matches my comment #2 config now.
So the flaky part was the one time it worked fine on the initial run?

Retrying two more times ...
Yeah base config still triggers the issue ...
So, contrary to what we first assumed, it was either:
a) not the config
b) we needed to run multiple tests to enter some bad state (but apache restarts in between)

In any of the above cases, @andreas, you can use the system to test any builds that you have.
Just take a potential pass with a grain of salt and rerun it a few times.

I downgraded to 2.4.29-1ubuntu4.6 and confirm the issue does not show up there.
I upgraded to 2.4.29-1ubuntu4.7 and can trigger it again.

You can up/downgrade with:
$ sudo dpkg -i ~/apachedebs/2.4.29-1ubuntu4.7/*
$ sudo dpkg -i ~/apachedebs/2.4.29-1ubuntu4.6/*
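To confirm which version ended up installed after such a dpkg up/downgrade (a generic check, not from the original comment):

```shell
# Print the installed apache2 package version.
dpkg-query -W -f='${Version}\n' apache2
```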

The test is at:
and you can restart it with "clear cache"

Leaving the test system in this state for Andreas to continue.

Changed in apache2 (Ubuntu):
status: Incomplete → In Progress
Andreas Hasenack (ahasenack) wrote :

eoan is fine

Changed in apache2 (Ubuntu Bionic):
status: New → In Progress
assignee: nobody → Andreas Hasenack (ahasenack)
importance: Undecided → Critical
Changed in apache2 (Ubuntu Eoan):
status: In Progress → Fix Released
Andreas Hasenack (ahasenack) wrote :

Cosmic affected (2.4.34-1ubuntu2.2)

Andreas Hasenack (ahasenack) wrote :

Disco is also clean, as expected.

Changed in apache2 (Ubuntu Disco):
status: New → Fix Released
Changed in apache2 (Ubuntu Eoan):
importance: Critical → Undecided
assignee: Andreas Hasenack (ahasenack) → nobody
Andreas Hasenack (ahasenack) wrote :

Actually, disco and eoan never had this bug, so the correct status for those tasks is "invalid".

Changed in apache2 (Ubuntu Disco):
status: Fix Released → Invalid
Changed in apache2 (Ubuntu Eoan):
status: Fix Released → Invalid
Andreas Hasenack (ahasenack) wrote :

I applied both upstream patches, and that seems to work. I will clean up the packaging with those patches and check whether perhaps only one is needed.

PPA with bionic test packages at

TJ (tj) wrote :

@ahasenack: 2.4.29-1ubuntu4.8~sslreadrc~ppa3 confirmed working without processes spinning

Steve Beattie (sbeattie) on 2019-07-15
information type: Public → Public Security

@ahasenack: Installing your 2.4.29-1ubuntu4.8~sslreadrc~ppa3 on a copy of the server where I initially discovered the issue makes it no longer reproducible.

If you need more testing later, as you indirectly said in #27, just let me know; I'll keep that testing instance around.

@paelzer: Sorry for not replying to your questions yesterday; I only got back to check on the issue today.

Andreas Hasenack (ahasenack) wrote :

Thanks TJ, Stefan and Christian

I'm cleaning up the packaging, properly formatting the patches, and doing a quick check if I really need both patches or just the last one I added. I'll post updates here, and then prepare an SRU.

Andreas Hasenack (ahasenack) wrote :

Only the first patch is needed to fix the issue reported here, but I'll include the second one as well, since it's a change regarding openssl 1.1.1 that might bite us in the future. Unless testing reveals another surprise.

description: updated
Andreas Hasenack (ahasenack) wrote :

I suddenly realized cosmic is EOL. I'll let the sru team weigh in if this fix can still be published for cosmic or not.

Changed in apache2 (Ubuntu Cosmic):
status: New → In Progress
importance: Undecided → Critical
assignee: nobody → Andreas Hasenack (ahasenack)
Steve Langasek (vorlon) wrote :

Because it is an SRU regression (apache2 2.4.34-1ubuntu2.2 introduced the same change to cosmic that apache2 2.4.29-1ubuntu4.7 did for bionic), I would accept a fix for cosmic for expedited SRU verification despite the EOL being in 2 days time.

Uploaded to cosmic and bionic proposed queues, unapproved.

description: updated
This report contains Public Security information; everyone can see this security-related information.
