2019-07-12 08:11:37 |
Stefan Huehner |
bug |
|
|
added bug |
2019-07-15 05:58:01 |
Christian Ehrhardt |
apache2 (Ubuntu): importance |
Undecided |
High |
|
2019-07-15 05:58:10 |
Christian Ehrhardt |
apache2 (Ubuntu): assignee |
|
Andreas Hasenack (ahasenack) |
|
2019-07-15 05:58:12 |
Christian Ehrhardt |
bug |
|
|
added subscriber Christian Ehrhardt |
2019-07-15 05:58:21 |
Christian Ehrhardt |
bug |
|
|
added subscriber Andreas Hasenack |
2019-07-15 07:06:00 |
Christian Ehrhardt |
cve linked |
|
2009-3555 |
|
2019-07-15 07:06:00 |
Christian Ehrhardt |
cve linked |
|
2011-3389 |
|
2019-07-15 07:06:00 |
Christian Ehrhardt |
cve linked |
|
2012-4929 |
|
2019-07-15 07:06:00 |
Christian Ehrhardt |
cve linked |
|
2013-0169 |
|
2019-07-15 07:06:00 |
Christian Ehrhardt |
cve linked |
|
2013-2566 |
|
2019-07-15 07:06:00 |
Christian Ehrhardt |
cve linked |
|
2013-3587 |
|
2019-07-15 07:06:00 |
Christian Ehrhardt |
cve linked |
|
2014-0160 |
|
2019-07-15 07:06:00 |
Christian Ehrhardt |
cve linked |
|
2014-0224 |
|
2019-07-15 07:06:00 |
Christian Ehrhardt |
cve linked |
|
2014-3566 |
|
2019-07-15 07:06:00 |
Christian Ehrhardt |
cve linked |
|
2015-0204 |
|
2019-07-15 07:06:00 |
Christian Ehrhardt |
cve linked |
|
2015-2808 |
|
2019-07-15 07:06:00 |
Christian Ehrhardt |
cve linked |
|
2015-4000 |
|
2019-07-15 07:06:00 |
Christian Ehrhardt |
cve linked |
|
2016-0703 |
|
2019-07-15 07:06:00 |
Christian Ehrhardt |
cve linked |
|
2016-0800 |
|
2019-07-15 07:06:00 |
Christian Ehrhardt |
cve linked |
|
2016-2183 |
|
2019-07-15 07:06:00 |
Christian Ehrhardt |
cve linked |
|
2016-6329 |
|
2019-07-15 07:06:00 |
Christian Ehrhardt |
cve linked |
|
2016-9244 |
|
2019-07-15 07:09:41 |
Launchpad Janitor |
apache2 (Ubuntu): status |
New |
Confirmed |
|
2019-07-15 07:23:17 |
Christian Ehrhardt |
apache2 (Ubuntu): status |
Confirmed |
Incomplete |
|
2019-07-15 07:29:47 |
Christian Ehrhardt |
apache2 (Ubuntu): importance |
High |
Critical |
|
2019-07-15 07:49:38 |
Christian Ehrhardt |
tags |
|
regression-update |
|
2019-07-15 08:36:05 |
Giraffe |
attachment added |
|
mods-enabled/ssl.conf https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1836329/+attachment/5277068/+files/ssl.conf |
|
2019-07-15 08:36:40 |
Giraffe |
attachment added |
|
/sites-enabled/000-default-ssl.conf https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1836329/+attachment/5277069/+files/000-default-ssl.conf |
|
2019-07-15 12:55:36 |
Andreas Hasenack |
cve unlinked |
2009-3555 |
|
|
2019-07-15 12:55:52 |
Andreas Hasenack |
cve unlinked |
2011-3389 |
|
|
2019-07-15 12:56:06 |
Andreas Hasenack |
cve unlinked |
2012-4929 |
|
|
2019-07-15 12:56:20 |
Andreas Hasenack |
cve unlinked |
2013-0169 |
|
|
2019-07-15 12:56:32 |
Andreas Hasenack |
cve unlinked |
2013-2566 |
|
|
2019-07-15 12:56:44 |
Andreas Hasenack |
cve unlinked |
2013-3587 |
|
|
2019-07-15 12:57:00 |
Andreas Hasenack |
cve unlinked |
2014-0160 |
|
|
2019-07-15 12:57:12 |
Andreas Hasenack |
cve unlinked |
2014-0224 |
|
|
2019-07-15 12:57:34 |
Andreas Hasenack |
cve unlinked |
2014-3566 |
|
|
2019-07-15 12:57:49 |
Andreas Hasenack |
cve unlinked |
2015-0204 |
|
|
2019-07-15 12:58:03 |
Andreas Hasenack |
cve unlinked |
2015-2808 |
|
|
2019-07-15 12:58:15 |
Andreas Hasenack |
cve unlinked |
2015-4000 |
|
|
2019-07-15 12:58:27 |
Andreas Hasenack |
cve unlinked |
2016-0703 |
|
|
2019-07-15 12:58:41 |
Andreas Hasenack |
cve unlinked |
2016-0800 |
|
|
2019-07-15 12:58:55 |
Andreas Hasenack |
cve unlinked |
2016-2183 |
|
|
2019-07-15 12:59:08 |
Andreas Hasenack |
cve unlinked |
2016-6329 |
|
|
2019-07-15 12:59:20 |
Andreas Hasenack |
cve unlinked |
2016-9244 |
|
|
2019-07-15 14:37:06 |
Andreas Hasenack |
apache2 (Ubuntu): status |
Incomplete |
In Progress |
|
2019-07-15 19:35:36 |
Andreas Hasenack |
nominated for series |
|
Ubuntu Eoan |
|
2019-07-15 19:35:36 |
Andreas Hasenack |
bug task added |
|
apache2 (Ubuntu Eoan) |
|
2019-07-15 19:35:53 |
Andreas Hasenack |
nominated for series |
|
Ubuntu Bionic |
|
2019-07-15 19:35:53 |
Andreas Hasenack |
bug task added |
|
apache2 (Ubuntu Bionic) |
|
2019-07-15 19:36:01 |
Andreas Hasenack |
apache2 (Ubuntu Bionic): status |
New |
In Progress |
|
2019-07-15 19:36:03 |
Andreas Hasenack |
apache2 (Ubuntu Bionic): assignee |
|
Andreas Hasenack (ahasenack) |
|
2019-07-15 19:36:06 |
Andreas Hasenack |
apache2 (Ubuntu Bionic): importance |
Undecided |
Critical |
|
2019-07-15 19:36:09 |
Andreas Hasenack |
apache2 (Ubuntu Eoan): status |
In Progress |
Fix Released |
|
2019-07-15 19:40:03 |
Andreas Hasenack |
nominated for series |
|
Ubuntu Cosmic |
|
2019-07-15 19:40:03 |
Andreas Hasenack |
bug task added |
|
apache2 (Ubuntu Cosmic) |
|
2019-07-15 19:58:31 |
Andreas Hasenack |
nominated for series |
|
Ubuntu Disco |
|
2019-07-15 19:58:31 |
Andreas Hasenack |
bug task added |
|
apache2 (Ubuntu Disco) |
|
2019-07-15 19:58:37 |
Andreas Hasenack |
apache2 (Ubuntu Disco): status |
New |
Fix Released |
|
2019-07-15 19:59:00 |
Andreas Hasenack |
apache2 (Ubuntu Eoan): importance |
Critical |
Undecided |
|
2019-07-15 19:59:03 |
Andreas Hasenack |
apache2 (Ubuntu Eoan): assignee |
Andreas Hasenack (ahasenack) |
|
|
2019-07-15 20:02:17 |
Andreas Hasenack |
apache2 (Ubuntu Disco): status |
Fix Released |
Invalid |
|
2019-07-15 20:02:19 |
Andreas Hasenack |
apache2 (Ubuntu Eoan): status |
Fix Released |
Invalid |
|
2019-07-15 23:00:34 |
Haw Loeung |
bug |
|
|
added subscriber Haw Loeung |
2019-07-15 23:11:30 |
Steve Beattie |
information type |
Public |
Public Security |
|
2019-07-16 09:38:15 |
Benjamin Baumer |
bug |
|
|
added subscriber Benjamin Baumer |
2019-07-16 19:34:13 |
Andreas Hasenack |
description |
With the latest apache2 2.4.29-1ubuntu4.7 published to 18.04 LTS (bionic), running ssllabs.com/ssltest against the server to verify its configuration leaves 2 apache2 processes at 100% CPU indefinitely.
Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.
So far I do not know whether this case is easy/likely to hit in normal HTTPS usage, or whether it is only triggered by that testing site.
But given that this is backported to an LTS release and allows an easy DoS, maybe the 4.7 update should be backed out?
So this is likely a regression in the 4.7 update, which contains only a single fix:
https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1833039
Extra info observed after the ssltest finished, with the processes still using CPU:
/server-status shows both processes 25234,25235 here in 'Reading' state:
Srv PID Acc M CPU SS Req Conn Child Slot Client Protocol VHost Request
0-0 25234 0/0/0 W 0.00 0 0 0.0 0.00 0.00 127.0.0.1 http/1.1 ip-172-30-1-107.eu-west-1.compu GET /server-status HTTP/1.1
0-0 25234 0/0/0 R 0.00 641 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 505 2 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 501 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 500 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 494 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 16.93 596 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.01 595 1 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/0/0 R 0.00 679 0 0.0 0.00 0.00 64.41.200.106 http/1.1
netstat on system:
tcp6 1 0 172.30.1.57:443 64.41.200.106:58658 CLOSE_WAIT
tcp6 1 0 172.30.1.57:443 64.41.200.107:60842 CLOSE_WAIT
with no other connections to port 443. |
[Impact]
* An explanation of the effects of the bug on users and
* justification for backporting the fix to the stable release.
* In addition, it is helpful, but not required, to include an
explanation of how the upload fixes this bug.
[Test Case]
* detailed instructions how to reproduce the bug
* these should allow someone who is not familiar with the affected
package to reproduce the bug and verify that the updated package fixes
the problem.
[Regression Potential]
* discussion of how regressions are most likely to manifest as a result of this change.
* It is assumed that any SRU candidate patch is well-tested before
upload and has a low overall risk of regression, but it's important
to make the effort to think about what ''could'' happen in the
event of a regression.
* This both shows the SRU team that the risks have been considered,
and provides guidance to testers in regression-testing the SRU.
[Other Info]
* Anything else you think is useful to include
* Anticipate questions from users, SRU, +1 maintenance, security teams and the Technical Board
* and address these questions in advance
[Original Description]
With the latest apache2 2.4.29-1ubuntu4.7 published to 18.04 LTS (bionic), running ssllabs.com/ssltest against the server to verify its configuration leaves 2 apache2 processes at 100% CPU indefinitely.
Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.
So far I do not know whether this case is easy/likely to hit in normal HTTPS usage, or whether it is only triggered by that testing site.
But given that this is backported to an LTS release and allows an easy DoS, maybe the 4.7 update should be backed out?
So this is likely a regression in the 4.7 update, which contains only a single fix:
https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1833039
Extra info observed after the ssltest finished, with the processes still using CPU:
/server-status shows both processes 25234,25235 here in 'Reading' state:
Srv PID Acc M CPU SS Req Conn Child Slot Client Protocol VHost Request
0-0 25234 0/0/0 W 0.00 0 0 0.0 0.00 0.00 127.0.0.1 http/1.1 ip-172-30-1-107.eu-west-1.compu GET /server-status HTTP/1.1
0-0 25234 0/0/0 R 0.00 641 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 505 2 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 501 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 500 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 494 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 16.93 596 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.01 595 1 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/0/0 R 0.00 679 0 0.0 0.00 0.00 64.41.200.106 http/1.1
netstat on system:
tcp6 1 0 172.30.1.57:443 64.41.200.106:58658 CLOSE_WAIT
tcp6 1 0 172.30.1.57:443 64.41.200.107:60842 CLOSE_WAIT
with no other connections to port 443. |
|
2019-07-16 19:41:37 |
Andreas Hasenack |
description |
[Impact]
* An explanation of the effects of the bug on users and
* justification for backporting the fix to the stable release.
* In addition, it is helpful, but not required, to include an
explanation of how the upload fixes this bug.
[Test Case]
* detailed instructions how to reproduce the bug
* these should allow someone who is not familiar with the affected
package to reproduce the bug and verify that the updated package fixes
the problem.
[Regression Potential]
* discussion of how regressions are most likely to manifest as a result of this change.
* It is assumed that any SRU candidate patch is well-tested before
upload and has a low overall risk of regression, but it's important
to make the effort to think about what ''could'' happen in the
event of a regression.
* This both shows the SRU team that the risks have been considered,
and provides guidance to testers in regression-testing the SRU.
[Other Info]
* Anything else you think is useful to include
* Anticipate questions from users, SRU, +1 maintenance, security teams and the Technical Board
* and address these questions in advance
[Original Description]
With the latest apache2 2.4.29-1ubuntu4.7 published to 18.04 LTS (bionic), running ssllabs.com/ssltest against the server to verify its configuration leaves 2 apache2 processes at 100% CPU indefinitely.
Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.
So far I do not know whether this case is easy/likely to hit in normal HTTPS usage, or whether it is only triggered by that testing site.
But given that this is backported to an LTS release and allows an easy DoS, maybe the 4.7 update should be backed out?
So this is likely a regression in the 4.7 update, which contains only a single fix:
https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1833039
Extra info observed after the ssltest finished, with the processes still using CPU:
/server-status shows both processes 25234,25235 here in 'Reading' state:
Srv PID Acc M CPU SS Req Conn Child Slot Client Protocol VHost Request
0-0 25234 0/0/0 W 0.00 0 0 0.0 0.00 0.00 127.0.0.1 http/1.1 ip-172-30-1-107.eu-west-1.compu GET /server-status HTTP/1.1
0-0 25234 0/0/0 R 0.00 641 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 505 2 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 501 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 500 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 494 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 16.93 596 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.01 595 1 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/0/0 R 0.00 679 0 0.0 0.00 0.00 64.41.200.106 http/1.1
netstat on system:
tcp6 1 0 172.30.1.57:443 64.41.200.106:58658 CLOSE_WAIT
tcp6 1 0 172.30.1.57:443 64.41.200.107:60842 CLOSE_WAIT
with no other connections to port 443. |
[Impact]
With the latest apache2 2.4.29-1ubuntu4.7 published to 18.04 LTS (bionic), running ssllabs.com/ssltest against the server to verify its configuration leaves 2 apache2 processes at 100% CPU indefinitely.
Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.
[Test Case]
We didn't find a reproducer that didn't involve https://ssllabs.com/ssltest, so the test case needs a publicly facing server with a DNS record.
On a test system that has a public IP and is reachable via https on a hostname (not just IP):
sudo apt update
sudo apt install apache2
sudo a2enmod ssl
sudo a2ensite default-ssl.conf
sudo service apache2 restart
In a terminal, monitor the apache2 processes' CPU usage with top.
Go to https://www.ssllabs.com/ssltest/ and enter the URL of your test server, using https. After a few seconds, the site will ask whether it should ignore the certificate error; confirm, and let it continue the test.
After a few minutes, the test will finish and you will get a report. Go back to the terminal where top is running: the apache2 processes will be spinning and using CPU, even though there is no more traffic.
With the fixed packages, the apache processes will remain idle.
[Regression Potential]
* discussion of how regressions are most likely to manifest as a result of this change.
* It is assumed that any SRU candidate patch is well-tested before
upload and has a low overall risk of regression, but it's important
to make the effort to think about what ''could'' happen in the
event of a regression.
* This both shows the SRU team that the risks have been considered,
and provides guidance to testers in regression-testing the SRU.
[Other Info]
* Anything else you think is useful to include
* Anticipate questions from users, SRU, +1 maintenance, security teams and the Technical Board
* and address these questions in advance
[Original Description]
With the latest apache2 2.4.29-1ubuntu4.7 published to 18.04 LTS (bionic), running ssllabs.com/ssltest against the server to verify its configuration leaves 2 apache2 processes at 100% CPU indefinitely.
Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.
So far I do not know whether this case is easy/likely to hit in normal HTTPS usage, or whether it is only triggered by that testing site.
But given that this is backported to an LTS release and allows an easy DoS, maybe the 4.7 update should be backed out?
So this is likely a regression in the 4.7 update, which contains only a single fix:
https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1833039
Extra info observed after the ssltest finished, with the processes still using CPU:
/server-status shows both processes 25234,25235 here in 'Reading' state:
Srv PID Acc M CPU SS Req Conn Child Slot Client Protocol VHost Request
0-0 25234 0/0/0 W 0.00 0 0 0.0 0.00 0.00 127.0.0.1 http/1.1 ip-172-30-1-107.eu-west-1.compu GET /server-status HTTP/1.1
0-0 25234 0/0/0 R 0.00 641 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 505 2 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 501 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 500 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 494 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 16.93 596 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.01 595 1 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/0/0 R 0.00 679 0 0.0 0.00 0.00 64.41.200.106 http/1.1
netstat on system:
tcp6 1 0 172.30.1.57:443 64.41.200.106:58658 CLOSE_WAIT
tcp6 1 0 172.30.1.57:443 64.41.200.107:60842 CLOSE_WAIT
with no other connections to port 443. |
|
2019-07-16 19:45:27 |
Andreas Hasenack |
description |
[Impact]
With the latest apache2 2.4.29-1ubuntu4.7 published to 18.04 LTS (bionic), running ssllabs.com/ssltest against the server to verify its configuration leaves 2 apache2 processes at 100% CPU indefinitely.
Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.
[Test Case]
We didn't find a reproducer that didn't involve https://ssllabs.com/ssltest, so the test case needs a publicly facing server with a DNS record.
On a test system that has a public IP and is reachable via https on a hostname (not just IP):
sudo apt update
sudo apt install apache2
sudo a2enmod ssl
sudo a2ensite default-ssl.conf
sudo service apache2 restart
In a terminal, monitor the apache2 processes' CPU usage with top.
Go to https://www.ssllabs.com/ssltest/ and enter the URL of your test server, using https. After a few seconds, the site will ask whether it should ignore the certificate error; confirm, and let it continue the test.
After a few minutes, the test will finish and you will get a report. Go back to the terminal where top is running: the apache2 processes will be spinning and using CPU, even though there is no more traffic.
With the fixed packages, the apache processes will remain idle.
[Regression Potential]
* discussion of how regressions are most likely to manifest as a result of this change.
* It is assumed that any SRU candidate patch is well-tested before
upload and has a low overall risk of regression, but it's important
to make the effort to think about what ''could'' happen in the
event of a regression.
* This both shows the SRU team that the risks have been considered,
and provides guidance to testers in regression-testing the SRU.
[Other Info]
* Anything else you think is useful to include
* Anticipate questions from users, SRU, +1 maintenance, security teams and the Technical Board
* and address these questions in advance
[Original Description]
With the latest apache2 2.4.29-1ubuntu4.7 published to 18.04 LTS (bionic), running ssllabs.com/ssltest against the server to verify its configuration leaves 2 apache2 processes at 100% CPU indefinitely.
Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.
So far I do not know whether this case is easy/likely to hit in normal HTTPS usage, or whether it is only triggered by that testing site.
But given that this is backported to an LTS release and allows an easy DoS, maybe the 4.7 update should be backed out?
So this is likely a regression in the 4.7 update, which contains only a single fix:
https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1833039
Extra info observed after the ssltest finished, with the processes still using CPU:
/server-status shows both processes 25234,25235 here in 'Reading' state:
Srv PID Acc M CPU SS Req Conn Child Slot Client Protocol VHost Request
0-0 25234 0/0/0 W 0.00 0 0 0.0 0.00 0.00 127.0.0.1 http/1.1 ip-172-30-1-107.eu-west-1.compu GET /server-status HTTP/1.1
0-0 25234 0/0/0 R 0.00 641 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 505 2 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 501 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 500 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 494 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 16.93 596 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.01 595 1 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/0/0 R 0.00 679 0 0.0 0.00 0.00 64.41.200.106 http/1.1
netstat on system:
tcp6 1 0 172.30.1.57:443 64.41.200.106:58658 CLOSE_WAIT
tcp6 1 0 172.30.1.57:443 64.41.200.107:60842 CLOSE_WAIT
with no other connections to port 443. |
[Impact]
With the latest apache2 2.4.29-1ubuntu4.7 published to 18.04 LTS (bionic), running ssllabs.com/ssltest against the server to verify its configuration leaves 2 apache2 processes at 100% CPU indefinitely.
Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.
[Test Case]
We didn't find a reproducer that didn't involve https://ssllabs.com/ssltest, so the test case needs a publicly facing server with a DNS record.
On a test system that has a public IP and is reachable via https on a hostname (not just IP):
sudo apt update
sudo apt install apache2
sudo a2enmod ssl
sudo a2ensite default-ssl.conf
sudo service apache2 restart
In a terminal, monitor the apache2 processes' CPU usage with top.
Go to https://www.ssllabs.com/ssltest/ and enter the URL of your test server, using https. After a few seconds, the site will ask whether it should ignore the certificate error; confirm, and let it continue the test.
After a few minutes, the test will finish and you will get a report. Go back to the terminal where top is running: the apache2 processes will be spinning and using CPU, even though there is no more traffic.
With the fixed packages, the apache processes will remain idle.
[Regression Potential]
* discussion of how regressions are most likely to manifest as a result of this change.
* It is assumed that any SRU candidate patch is well-tested before
upload and has a low overall risk of regression, but it's important
to make the effort to think about what ''could'' happen in the
event of a regression.
* This both shows the SRU team that the risks have been considered,
and provides guidance to testers in regression-testing the SRU.
[Other Info]
While investigating this issue, another fix for an openssl 1.1.1 issue was found in the apache upstream git repo which involves http2 and how the code handles SSL_read() return values: https://github.com/apache/httpd/commit/644cff9977efa322fe6c0ae3357a5b8cb1eeec11
No upstream bug was found, nor could I come up with a reproducer case, but it seemed sensible to include that patch in this SRU, which was, after all, triggered by the openssl 1.1.1 upgrade in bionic.
[Original Description]
With the latest apache2 2.4.29-1ubuntu4.7 published to 18.04 LTS (bionic), running ssllabs.com/ssltest against the server to verify its configuration leaves 2 apache2 processes at 100% CPU indefinitely.
Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.
So far I do not know whether this case is easy/likely to hit in normal HTTPS usage, or whether it is only triggered by that testing site.
But given that this is backported to an LTS release and allows an easy DoS, maybe the 4.7 update should be backed out?
So this is likely a regression in the 4.7 update, which contains only a single fix:
https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1833039
Extra info observed after the ssltest finished, with the processes still using CPU:
/server-status shows both processes 25234,25235 here in 'Reading' state:
Srv PID Acc M CPU SS Req Conn Child Slot Client Protocol VHost Request
0-0 25234 0/0/0 W 0.00 0 0 0.0 0.00 0.00 127.0.0.1 http/1.1 ip-172-30-1-107.eu-west-1.compu GET /server-status HTTP/1.1
0-0 25234 0/0/0 R 0.00 641 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 505 2 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 501 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 500 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 494 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 16.93 596 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.01 595 1 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/0/0 R 0.00 679 0 0.0 0.00 0.00 64.41.200.106 http/1.1
netstat on system:
tcp6 1 0 172.30.1.57:443 64.41.200.106:58658 CLOSE_WAIT
tcp6 1 0 172.30.1.57:443 64.41.200.107:60842 CLOSE_WAIT
with no other connections to port 443. |
|
2019-07-16 19:59:51 |
Andreas Hasenack |
description |
[Impact]
With the latest apache2 2.4.29-1ubuntu4.7 published to 18.04 LTS (bionic), running ssllabs.com/ssltest against the server to verify its configuration leaves 2 apache2 processes at 100% CPU indefinitely.
Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.
[Test Case]
We didn't find a reproducer that didn't involve https://ssllabs.com/ssltest, so the test case needs a publicly facing server with a DNS record.
On a test system that has a public IP and is reachable via https on a hostname (not just IP):
sudo apt update
sudo apt install apache2
sudo a2enmod ssl
sudo a2ensite default-ssl.conf
sudo service apache2 restart
In a terminal, monitor the apache2 processes' CPU usage with top.
Go to https://www.ssllabs.com/ssltest/ and enter the URL of your test server, using https. After a few seconds, the site will ask whether it should ignore the certificate error; confirm, and let it continue the test.
After a few minutes, the test will finish and you will get a report. Go back to the terminal where top is running: the apache2 processes will be spinning and using CPU, even though there is no more traffic.
With the fixed packages, the apache processes will remain idle.
[Regression Potential]
* discussion of how regressions are most likely to manifest as a result of this change.
* It is assumed that any SRU candidate patch is well-tested before
upload and has a low overall risk of regression, but it's important
to make the effort to think about what ''could'' happen in the
event of a regression.
* This both shows the SRU team that the risks have been considered,
and provides guidance to testers in regression-testing the SRU.
[Other Info]
While investigating this issue, another fix for an openssl 1.1.1 issue was found in the apache upstream git repo which involves http2 and how the code handles SSL_read() return values: https://github.com/apache/httpd/commit/644cff9977efa322fe6c0ae3357a5b8cb1eeec11
No upstream bug was found, nor could I come up with a reproducer case, but it seemed sensible to include that patch in this SRU, which was, after all, triggered by the openssl 1.1.1 upgrade in bionic.
[Original Description]
With the latest apache2 2.4.29-1ubuntu4.7 published to 18.04 LTS (bionic), running ssllabs.com/ssltest against the server to verify its configuration leaves 2 apache2 processes at 100% CPU indefinitely.
Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.
So far I do not know whether this case is easy/likely to hit in normal HTTPS usage, or whether it is only triggered by that testing site.
But given that this is backported to an LTS release and allows an easy DoS, maybe the 4.7 update should be backed out?
So this is likely a regression in the 4.7 update, which contains only a single fix:
https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1833039
Extra info observed after the ssltest finished, with the processes still using CPU:
/server-status shows both processes 25234,25235 here in 'Reading' state:
Srv PID Acc M CPU SS Req Conn Child Slot Client Protocol VHost Request
0-0 25234 0/0/0 W 0.00 0 0 0.0 0.00 0.00 127.0.0.1 http/1.1 ip-172-30-1-107.eu-west-1.compu GET /server-status HTTP/1.1
0-0 25234 0/0/0 R 0.00 641 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 505 2 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 501 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 500 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 494 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 16.93 596 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.01 595 1 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/0/0 R 0.00 679 0 0.0 0.00 0.00 64.41.200.106 http/1.1
netstat on system:
tcp6 1 0 172.30.1.57:443 64.41.200.106:58658 CLOSE_WAIT
tcp6 1 0 172.30.1.57:443 64.41.200.107:60842 CLOSE_WAIT
with no other connections to port 443. |
[Impact]
With the latest apache2 2.4.29-1ubuntu4.7 published to 18.04 LTS (bionic), running ssllabs.com/ssltest against the server to verify its configuration leaves 2 apache2 processes at 100% CPU indefinitely.
Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.
[Test Case]
We didn't find a reproducer that didn't involve https://ssllabs.com/ssltest, so the test case needs a publicly facing server with a DNS record.
On a test system that has a public IP and is reachable via https on a hostname (not just IP):
sudo apt update
sudo apt install apache2
sudo a2enmod ssl
sudo a2ensite default-ssl.conf
sudo service apache2 restart
In a terminal, monitor the apache2 processes' CPU usage with top.
Go to https://www.ssllabs.com/ssltest/ and enter the URL of your test server, using https. After a few seconds, the site will ask whether it should ignore the certificate error; confirm, and let it continue the test.
After a few minutes, the test will finish and you will get a report. Go back to the terminal where top is running: the apache2 processes will be spinning and using CPU, even though there is no more traffic.
With the fixed packages, the apache processes will remain idle.
[Regression Potential]
This upload fixes a regression introduced by the update for a previous regression (bug #1833039), which shows that the situation is tricky. The fix here (clear-retry-flags-before-abort.patch) at least does not change anything in the previous patch from bug #1833039, so that fix remains correct.
The second patch, for http/2 errors with openssl 1.1.1, unfortunately has no test case; it deals with error status handling and is specific to openssl 1.1.1. It has been applied upstream (and backported to the 2.4.x branch) for many months now. The trunk commit at http://svn.apache.org/viewvc?view=revision&revision=1843954 has a more elaborate explanation of the behavior changes this does, and does not, introduce.
[Other Info]
While investigating this issue, another fix for an openssl 1.1.1 issue was found in the apache upstream git repo which involves http2 and how the code handles SSL_read() return values: https://github.com/apache/httpd/commit/644cff9977efa322fe6c0ae3357a5b8cb1eeec11
No upstream bug was found, nor could I come up with a reproducer case, but it seemed sensible to include that patch in this SRU, which was, after all, triggered by the openssl 1.1.1 upgrade in bionic.
[Original Description]
With the latest apache2 2.4.29-1ubuntu4.7 published to 18.04 LTS (bionic), running ssllabs.com/ssltest against the server to verify its configuration leaves 2 apache2 processes at 100% CPU indefinitely.
Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.
So far I do not know whether this case is easy/likely to hit in normal HTTPS usage, or whether it is only triggered by that testing site.
But given that this is backported to an LTS release and allows an easy DoS, maybe the 4.7 update should be backed out?
So this is likely a regression in the 4.7 update, which contains only a single fix:
https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1833039
Extra info observed after the ssltest finished, with the processes still using CPU:
/server-status shows both processes 25234,25235 here in 'Reading' state:
Srv PID Acc M CPU SS Req Conn Child Slot Client Protocol VHost Request
0-0 25234 0/0/0 W 0.00 0 0 0.0 0.00 0.00 127.0.0.1 http/1.1 ip-172-30-1-107.eu-west-1.compu GET /server-status HTTP/1.1
0-0 25234 0/0/0 R 0.00 641 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 505 2 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 501 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 500 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 494 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 16.93 596 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.01 595 1 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/0/0 R 0.00 679 0 0.0 0.00 0.00 64.41.200.106 http/1.1
netstat on system:
tcp6 1 0 172.30.1.57:443 64.41.200.106:58658 CLOSE_WAIT
tcp6 1 0 172.30.1.57:443 64.41.200.107:60842 CLOSE_WAIT
with no other connections to port 443. |
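For reference, CLOSE_WAIT in the netstat output means the remote peer (here, the ssltest scanner) has already closed its side of the connection, while apache2 has not yet closed the local socket. A minimal stdlib sketch of how a socket ends up in CLOSE_WAIT (illustrative only, not apache code):

```python
import socket

# Loopback server/client pair.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.create_connection(srv.getsockname())
conn, _ = srv.accept()

cli.close()            # peer sends FIN and closes its end
data = conn.recv(16)   # EOF: recv() returns b"" once the FIN arrives
# Until the application calls conn.close(), the kernel keeps this
# socket in CLOSE_WAIT, which is the state seen in the report above.
conn.close()
srv.close()
```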
|
2019-07-16 20:02:58 |
Andreas Hasenack |
description |
[Impact]
With the latest apache2 2.4.29-1ubuntu4.7 published to 18.04 LTS (bionic), running ssllabs.com/ssltest against the server to verify its configuration leaves two apache2 processes spinning at 100% CPU indefinitely.
Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.
[Test Case]
We didn't find a reproducer that didn't involve https://ssllabs.com/ssltest, so the test case needs a publicly facing server with a DNS record.
On a test system that has a public IP and is reachable via https on a hostname (not just IP):
sudo apt update
sudo apt install apache2
sudo a2enmod ssl
sudo a2ensite default-ssl.conf
sudo service apache2 restart
In a terminal, monitor the apache2 processes' CPU usage with top.
Go to https://www.ssllabs.com/ssltest/ and enter the URL of your test server, using https. After a few seconds, the site will ask whether it should ignore the certificate error; confirm, and let it continue the test.
After a few minutes, the test will finish and you will get a report. Go back to the terminal where top is running: the apache2 processes will still be spinning and using CPU, even though there is no more traffic.
With the fixed packages, the apache processes will remain idle.
[Regression Potential]
This upload itself fixes a regression introduced by the fix for a previous regression (bug #1833039), which shows how tricky the situation is. The fix here (clear-retry-flags-before-abort.patch) at least does not change anything in the previous patch from bug #1833039, so that fix remains correct.
The second patch, for HTTP/2 errors with openssl 1.1.1, unfortunately has no test case; it deals with error status and is specific to openssl 1.1.1. It has been applied upstream (and backported to the 2.4.x branch) for many months now. The trunk commit at http://svn.apache.org/viewvc?view=revision&revision=1843954 has a more elaborate explanation of the behavior changes it does, and does not, introduce.
[Other Info]
While investigating this issue, another fix for an openssl 1.1.1 problem was found in the apache upstream git repo; it involves HTTP/2 and how the code handles SSL_read() return values: https://github.com/apache/httpd/commit/644cff9977efa322fe6c0ae3357a5b8cb1eeec11
No upstream bug was found, nor could I come up with a reproducer case, but it seemed sensible to include that patch in this SRU, which was, after all, triggered by the openssl 1.1.1 upgrade in bionic.
[Original Description]
With the latest apache2 2.4.29-1ubuntu4.7 published to 18.04 LTS (bionic), running ssllabs.com/ssltest against the server to verify its configuration leaves two apache2 processes spinning at 100% CPU indefinitely.
Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.
So far I do not know whether this case is easy or likely to hit in normal HTTPS usage, or only triggered by that testing site.
But given that this is backported to an LTS release and allows an easy DoS, maybe 4.7 should be backed out?
So this is likely a regression in the update to 4.7, which carried only a single fix:
https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1833039
Extra info observed after the ssltest run is over but the processes are still using CPU:
/server-status shows both processes 25234,25235 here in 'Reading' state:
Srv PID Acc M CPU SS Req Conn Child Slot Client Protocol VHost Request
0-0 25234 0/0/0 W 0.00 0 0 0.0 0.00 0.00 127.0.0.1 http/1.1 ip-172-30-1-107.eu-west-1.compu GET /server-status HTTP/1.1
0-0 25234 0/0/0 R 0.00 641 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 505 2 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 501 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 500 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 494 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 16.93 596 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.01 595 1 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/0/0 R 0.00 679 0 0.0 0.00 0.00 64.41.200.106 http/1.1
netstat on system:
tcp6 1 0 172.30.1.57:443 64.41.200.106:58658 CLOSE_WAIT
tcp6 1 0 172.30.1.57:443 64.41.200.107:60842 CLOSE_WAIT
with no other connections to port 443. |
[Impact]
With the latest apache2 2.4.29-1ubuntu4.7 published to 18.04 LTS (bionic), running ssllabs.com/ssltest against the server to verify its configuration leaves two apache2 processes spinning at 100% CPU indefinitely.
Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.
[Test Case]
We didn't find a reproducer that didn't involve https://ssllabs.com/ssltest, so the test case needs a publicly facing server with a DNS record.
On a test system that has a public IP and is reachable via https on a hostname (not just IP):
sudo apt update
sudo apt install apache2
sudo a2enmod ssl
sudo a2ensite default-ssl.conf
sudo service apache2 restart
In a terminal, monitor the apache2 processes' CPU usage with top.
Go to https://www.ssllabs.com/ssltest/ and enter the URL of your test server, using https. After a few seconds, the site will ask whether it should ignore the certificate error; confirm, and let it continue the test.
After a few minutes, the test will finish and you will get a report. Go back to the terminal where top is running: the apache2 processes will still be spinning and using CPU, even though there is no more traffic.
With the fixed packages, the apache processes will remain idle.
[Regression Potential]
This upload itself fixes a regression introduced by the fix for a previous regression (bug #1833039), which shows how tricky the situation is. The fix here (clear-retry-flags-before-abort.patch) at least does not change anything in the previous patch from bug #1833039, so that fix remains correct.
The second patch, for HTTP/2 errors with openssl 1.1.1, unfortunately has no test case; it deals with error status and is specific to openssl 1.1.1. It has been applied upstream (and backported to the 2.4.x branch) for many months now. The trunk commit at http://svn.apache.org/viewvc?view=revision&revision=1843954 has a more elaborate explanation of the behavior changes it does, and does not, introduce.
We do have a DEP8 test that covers HTTP/2 SSL downloads, and it passes. But it also passed before this patch.
[Other Info]
While investigating this issue, another fix for an openssl 1.1.1 problem was found in the apache upstream git repo; it involves HTTP/2 and how the code handles SSL_read() return values: https://github.com/apache/httpd/commit/644cff9977efa322fe6c0ae3357a5b8cb1eeec11
No upstream bug was found, nor could I come up with a reproducer case, but it seemed sensible to include that patch in this SRU, which was, after all, triggered by the openssl 1.1.1 upgrade in bionic.
[Original Description]
With the latest apache2 2.4.29-1ubuntu4.7 published to 18.04 LTS (bionic), running ssllabs.com/ssltest against the server to verify its configuration leaves two apache2 processes spinning at 100% CPU indefinitely.
Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.
So far I do not know whether this case is easy or likely to hit in normal HTTPS usage, or only triggered by that testing site.
But given that this is backported to an LTS release and allows an easy DoS, maybe 4.7 should be backed out?
So this is likely a regression in the update to 4.7, which carried only a single fix:
https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1833039
Extra info observed after the ssltest run is over but the processes are still using CPU:
/server-status shows both processes 25234,25235 here in 'Reading' state:
Srv PID Acc M CPU SS Req Conn Child Slot Client Protocol VHost Request
0-0 25234 0/0/0 W 0.00 0 0 0.0 0.00 0.00 127.0.0.1 http/1.1 ip-172-30-1-107.eu-west-1.compu GET /server-status HTTP/1.1
0-0 25234 0/0/0 R 0.00 641 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 505 2 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 501 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 500 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 494 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 16.93 596 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.01 595 1 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/0/0 R 0.00 679 0 0.0 0.00 0.00 64.41.200.106 http/1.1
netstat on system:
tcp6 1 0 172.30.1.57:443 64.41.200.106:58658 CLOSE_WAIT
tcp6 1 0 172.30.1.57:443 64.41.200.107:60842 CLOSE_WAIT
with no other connections to port 443. |
|
2019-07-16 20:03:26 |
Andreas Hasenack |
description |
[Impact]
With the latest apache2 2.4.29-1ubuntu4.7 published to 18.04 LTS (bionic), running ssllabs.com/ssltest against the server to verify its configuration leaves two apache2 processes spinning at 100% CPU indefinitely.
Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.
[Test Case]
We didn't find a reproducer that didn't involve https://ssllabs.com/ssltest, so the test case needs a publicly facing server with a DNS record.
On a test system that has a public IP and is reachable via https on a hostname (not just IP):
sudo apt update
sudo apt install apache2
sudo a2enmod ssl
sudo a2ensite default-ssl.conf
sudo service apache2 restart
In a terminal, monitor the apache2 processes' CPU usage with top.
Go to https://www.ssllabs.com/ssltest/ and enter the URL of your test server, using https. After a few seconds, the site will ask whether it should ignore the certificate error; confirm, and let it continue the test.
After a few minutes, the test will finish and you will get a report. Go back to the terminal where top is running: the apache2 processes will still be spinning and using CPU, even though there is no more traffic.
With the fixed packages, the apache processes will remain idle.
[Regression Potential]
This upload itself fixes a regression introduced by the fix for a previous regression (bug #1833039), which shows how tricky the situation is. The fix here (clear-retry-flags-before-abort.patch) at least does not change anything in the previous patch from bug #1833039, so that fix remains correct.
The second patch, for HTTP/2 errors with openssl 1.1.1, unfortunately has no test case; it deals with error status and is specific to openssl 1.1.1. It has been applied upstream (and backported to the 2.4.x branch) for many months now. The trunk commit at http://svn.apache.org/viewvc?view=revision&revision=1843954 has a more elaborate explanation of the behavior changes it does, and does not, introduce.
We do have a DEP8 test that covers HTTP/2 SSL downloads, and it passes. But it also passed before this patch.
[Other Info]
While investigating this issue, another fix for an openssl 1.1.1 problem was found in the apache upstream git repo; it involves HTTP/2 and how the code handles SSL_read() return values: https://github.com/apache/httpd/commit/644cff9977efa322fe6c0ae3357a5b8cb1eeec11
No upstream bug was found, nor could I come up with a reproducer case, but it seemed sensible to include that patch in this SRU, which was, after all, triggered by the openssl 1.1.1 upgrade in bionic.
[Original Description]
With the latest apache2 2.4.29-1ubuntu4.7 published to 18.04 LTS (bionic), running ssllabs.com/ssltest against the server to verify its configuration leaves two apache2 processes spinning at 100% CPU indefinitely.
Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.
So far I do not know whether this case is easy or likely to hit in normal HTTPS usage, or only triggered by that testing site.
But given that this is backported to an LTS release and allows an easy DoS, maybe 4.7 should be backed out?
So this is likely a regression in the update to 4.7, which carried only a single fix:
https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1833039
Extra info observed after the ssltest run is over but the processes are still using CPU:
/server-status shows both processes 25234,25235 here in 'Reading' state:
Srv PID Acc M CPU SS Req Conn Child Slot Client Protocol VHost Request
0-0 25234 0/0/0 W 0.00 0 0 0.0 0.00 0.00 127.0.0.1 http/1.1 ip-172-30-1-107.eu-west-1.compu GET /server-status HTTP/1.1
0-0 25234 0/0/0 R 0.00 641 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 505 2 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 501 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 500 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 494 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 16.93 596 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.01 595 1 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/0/0 R 0.00 679 0 0.0 0.00 0.00 64.41.200.106 http/1.1
netstat on system:
tcp6 1 0 172.30.1.57:443 64.41.200.106:58658 CLOSE_WAIT
tcp6 1 0 172.30.1.57:443 64.41.200.107:60842 CLOSE_WAIT
with no other connections to port 443. |
[Impact]
With the latest apache2 2.4.29-1ubuntu4.7 published to 18.04 LTS (bionic), running ssllabs.com/ssltest against the server to verify its configuration leaves two apache2 processes spinning at 100% CPU indefinitely.
Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.
[Test Case]
We didn't find a reproducer that didn't involve https://ssllabs.com/ssltest, so the test case needs a publicly facing server with a DNS record.
On a test system that has a public IP and is reachable via https on a hostname (not just IP):
sudo apt update
sudo apt install apache2
sudo a2enmod ssl
sudo a2ensite default-ssl.conf
sudo service apache2 restart
In a terminal, monitor the apache2 processes' CPU usage with top.
Go to https://www.ssllabs.com/ssltest/ and enter the URL of your test server, using https. After a few seconds, the site will ask whether it should ignore the certificate error; confirm, and let it continue the test.
After a few minutes, the test will finish and you will get a report. Go back to the terminal where top is running: the apache2 processes will still be spinning and using CPU, even though there is no more traffic.
With the fixed packages, the apache processes will remain idle.
[Regression Potential]
This upload itself fixes a regression introduced by the fix for a previous regression (bug #1833039), which shows how tricky the situation is. The fix here (clear-retry-flags-before-abort.patch) at least does not change anything in the previous patch from bug #1833039, so that fix remains correct.
The second patch, for HTTP/2 errors with openssl 1.1.1, unfortunately has no test case; it deals with error status and is specific to openssl 1.1.1. It has been applied upstream (and backported to the 2.4.x branch) for many months now. The trunk commit at http://svn.apache.org/viewvc?view=revision&revision=1843954 has a more elaborate explanation of the behavior changes it does, and does not, introduce.
We do have a DEP8 test that covers HTTP/2 SSL downloads, and it passes. But it also passed before this patch. I also manually tried such downloads of varying sizes (up to 10 MB) with no failures.
[Other Info]
While investigating this issue, another fix for an openssl 1.1.1 problem was found in the apache upstream git repo; it involves HTTP/2 and how the code handles SSL_read() return values: https://github.com/apache/httpd/commit/644cff9977efa322fe6c0ae3357a5b8cb1eeec11
No upstream bug was found, nor could I come up with a reproducer case, but it seemed sensible to include that patch in this SRU, which was, after all, triggered by the openssl 1.1.1 upgrade in bionic.
[Original Description]
With the latest apache2 2.4.29-1ubuntu4.7 published to 18.04 LTS (bionic), running ssllabs.com/ssltest against the server to verify its configuration leaves two apache2 processes spinning at 100% CPU indefinitely.
Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.
So far I do not know whether this case is easy or likely to hit in normal HTTPS usage, or only triggered by that testing site.
But given that this is backported to an LTS release and allows an easy DoS, maybe 4.7 should be backed out?
So this is likely a regression in the update to 4.7, which carried only a single fix:
https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1833039
Extra info observed after the ssltest run is over but the processes are still using CPU:
/server-status shows both processes 25234,25235 here in 'Reading' state:
Srv PID Acc M CPU SS Req Conn Child Slot Client Protocol VHost Request
0-0 25234 0/0/0 W 0.00 0 0 0.0 0.00 0.00 127.0.0.1 http/1.1 ip-172-30-1-107.eu-west-1.compu GET /server-status HTTP/1.1
0-0 25234 0/0/0 R 0.00 641 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 505 2 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 501 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 500 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 494 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 16.93 596 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.01 595 1 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/0/0 R 0.00 679 0 0.0 0.00 0.00 64.41.200.106 http/1.1
netstat on system:
tcp6 1 0 172.30.1.57:443 64.41.200.106:58658 CLOSE_WAIT
tcp6 1 0 172.30.1.57:443 64.41.200.107:60842 CLOSE_WAIT
with no other connections to port 443. |
|
2019-07-16 20:24:59 |
Launchpad Janitor |
merge proposal linked |
|
https://code.launchpad.net/~ahasenack/ubuntu/+source/apache2/+git/apache2/+merge/370217 |
|
2019-07-16 21:11:57 |
Launchpad Janitor |
merge proposal linked |
|
https://code.launchpad.net/~ahasenack/ubuntu/+source/apache2/+git/apache2/+merge/370222 |
|
2019-07-16 21:46:40 |
Andreas Hasenack |
apache2 (Ubuntu Cosmic): status |
New |
In Progress |
|
2019-07-16 21:46:42 |
Andreas Hasenack |
apache2 (Ubuntu Cosmic): importance |
Undecided |
Critical |
|
2019-07-16 21:47:01 |
Andreas Hasenack |
apache2 (Ubuntu Cosmic): assignee |
|
Andreas Hasenack (ahasenack) |
|
2019-07-17 13:07:22 |
Andreas Hasenack |
description |
[Impact]
With the latest apache2 2.4.29-1ubuntu4.7 published to 18.04 LTS (bionic), running ssllabs.com/ssltest against the server to verify its configuration leaves two apache2 processes spinning at 100% CPU indefinitely.
Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.
[Test Case]
We didn't find a reproducer that didn't involve https://ssllabs.com/ssltest, so the test case needs a publicly facing server with a DNS record.
On a test system that has a public IP and is reachable via https on a hostname (not just IP):
sudo apt update
sudo apt install apache2
sudo a2enmod ssl
sudo a2ensite default-ssl.conf
sudo service apache2 restart
In a terminal, monitor the apache2 processes' CPU usage with top.
Go to https://www.ssllabs.com/ssltest/ and enter the URL of your test server, using https. After a few seconds, the site will ask whether it should ignore the certificate error; confirm, and let it continue the test.
After a few minutes, the test will finish and you will get a report. Go back to the terminal where top is running: the apache2 processes will still be spinning and using CPU, even though there is no more traffic.
With the fixed packages, the apache processes will remain idle.
[Regression Potential]
This upload itself fixes a regression introduced by the fix for a previous regression (bug #1833039), which shows how tricky the situation is. The fix here (clear-retry-flags-before-abort.patch) at least does not change anything in the previous patch from bug #1833039, so that fix remains correct.
The second patch, for HTTP/2 errors with openssl 1.1.1, unfortunately has no test case; it deals with error status and is specific to openssl 1.1.1. It has been applied upstream (and backported to the 2.4.x branch) for many months now. The trunk commit at http://svn.apache.org/viewvc?view=revision&revision=1843954 has a more elaborate explanation of the behavior changes it does, and does not, introduce.
We do have a DEP8 test that covers HTTP/2 SSL downloads, and it passes. But it also passed before this patch. I also manually tried such downloads of varying sizes (up to 10 MB) with no failures.
[Other Info]
While investigating this issue, another fix for an openssl 1.1.1 problem was found in the apache upstream git repo; it involves HTTP/2 and how the code handles SSL_read() return values: https://github.com/apache/httpd/commit/644cff9977efa322fe6c0ae3357a5b8cb1eeec11
No upstream bug was found, nor could I come up with a reproducer case, but it seemed sensible to include that patch in this SRU, which was, after all, triggered by the openssl 1.1.1 upgrade in bionic.
[Original Description]
With the latest apache2 2.4.29-1ubuntu4.7 published to 18.04 LTS (bionic), running ssllabs.com/ssltest against the server to verify its configuration leaves two apache2 processes spinning at 100% CPU indefinitely.
Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.
So far I do not know whether this case is easy or likely to hit in normal HTTPS usage, or only triggered by that testing site.
But given that this is backported to an LTS release and allows an easy DoS, maybe 4.7 should be backed out?
So this is likely a regression in the update to 4.7, which carried only a single fix:
https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1833039
Extra info observed after the ssltest run is over but the processes are still using CPU:
/server-status shows both processes 25234,25235 here in 'Reading' state:
Srv PID Acc M CPU SS Req Conn Child Slot Client Protocol VHost Request
0-0 25234 0/0/0 W 0.00 0 0 0.0 0.00 0.00 127.0.0.1 http/1.1 ip-172-30-1-107.eu-west-1.compu GET /server-status HTTP/1.1
0-0 25234 0/0/0 R 0.00 641 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 505 2 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 501 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 500 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 494 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 16.93 596 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.01 595 1 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/0/0 R 0.00 679 0 0.0 0.00 0.00 64.41.200.106 http/1.1
netstat on system:
tcp6 1 0 172.30.1.57:443 64.41.200.106:58658 CLOSE_WAIT
tcp6 1 0 172.30.1.57:443 64.41.200.107:60842 CLOSE_WAIT
with no other connections to port 443. |
[Impact]
With the latest apache2 2.4.29-1ubuntu4.7 published to 18.04 LTS (bionic), running ssllabs.com/ssltest against the server to verify its configuration leaves two apache2 processes spinning at 100% CPU indefinitely.
Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.
[Test Case]
We didn't find a reproducer that didn't involve https://ssllabs.com/ssltest, so the test case needs a publicly facing server with a DNS record.
On a test system that has a public IP and is reachable via https on a hostname (not just IP):
sudo apt update
sudo apt install apache2
sudo a2enmod ssl
sudo a2ensite default-ssl.conf
sudo service apache2 restart
In a terminal, monitor the apache2 processes' CPU usage with top.
Go to https://www.ssllabs.com/ssltest/ and enter the URL of your test server, using https. After a few seconds, the site will ask whether it should ignore the certificate error; confirm, and let it continue the test.
After a few minutes, the test will finish and you will get a report. Go back to the terminal where top is running: the apache2 processes will still be spinning and using CPU, even though there is no more traffic.
With the fixed packages, the apache processes will remain idle.
[Regression Potential]
This upload itself fixes a regression introduced by the fix for a previous regression (bug #1833039), which shows how tricky the situation is. The fix here (clear-retry-flags-before-abort.patch) at least does not change anything in the previous patch from bug #1833039, so that fix remains correct.
The second patch, for HTTP/2 errors with openssl 1.1.1, unfortunately has no test case; it deals with error status and is specific to openssl 1.1.1. It has been applied upstream (and backported to the 2.4.x branch) for many months now. The trunk commit at http://svn.apache.org/viewvc?view=revision&revision=1843954 has a more elaborate explanation of the behavior changes it does, and does not, introduce.
We do have a DEP8 test that covers HTTP/2 SSL downloads, and it passes. But it also passed before this patch. I also manually tried such downloads of varying sizes (up to 10 MB) with no failures.
[Other Info]
While investigating this issue, another fix for an openssl 1.1.1 problem was found in the apache upstream git repo; it involves HTTP/2 and how the code handles SSL_read() return values: https://github.com/apache/httpd/commit/644cff9977efa322fe6c0ae3357a5b8cb1eeec11
No upstream bug was found, nor could I come up with a reproducer case, but it seemed sensible to include that patch in this SRU, which was, after all, triggered by the openssl 1.1.1 upgrade in bionic.
The d/t/run-test-suite DEP8 test is falsely returning success: when called as root it does not actually run, but it does not fail either. I filed bug #1836898 about this, and ran the suite manually for both cosmic and bionic. There is one test failure, but it's a trivial one, introduced by a patch that added a comment: the test actually parses C comments in that particular header file. The bug has the details.
[Original Description]
With the latest apache2 2.4.29-1ubuntu4.7 published to 18.04 LTS (bionic), running ssllabs.com/ssltest against the server to verify its configuration leaves two apache2 processes spinning at 100% CPU indefinitely.
Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.
So far I do not know whether this case is easy or likely to hit in normal HTTPS usage, or only triggered by that testing site.
But given that this is backported to an LTS release and allows an easy DoS, maybe 4.7 should be backed out?
So this is likely a regression in the update to 4.7, which carried only a single fix:
https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1833039
Extra info observed after the ssltest run is over but the processes are still using CPU:
/server-status shows both processes 25234,25235 here in 'Reading' state:
Srv PID Acc M CPU SS Req Conn Child Slot Client Protocol VHost Request
0-0 25234 0/0/0 W 0.00 0 0 0.0 0.00 0.00 127.0.0.1 http/1.1 ip-172-30-1-107.eu-west-1.compu GET /server-status HTTP/1.1
0-0 25234 0/0/0 R 0.00 641 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 505 2 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 501 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 500 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 494 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 16.93 596 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.01 595 1 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/0/0 R 0.00 679 0 0.0 0.00 0.00 64.41.200.106 http/1.1
netstat on system:
tcp6 1 0 172.30.1.57:443 64.41.200.106:58658 CLOSE_WAIT
tcp6 1 0 172.30.1.57:443 64.41.200.107:60842 CLOSE_WAIT
with no other connections to port 443. |
|
2019-07-17 13:10:09 |
Andreas Hasenack |
description |
[Impact]
With the latest apache2 2.4.29-1ubuntu4.7 published to 18.04 LTS (bionic), running ssllabs.com/ssltest against the server to verify its configuration leaves two apache2 processes spinning at 100% CPU indefinitely.
Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.
[Test Case]
We didn't find a reproducer that didn't involve https://ssllabs.com/ssltest, so the test case needs a publicly facing server with a DNS record.
On a test system that has a public IP and is reachable via https on a hostname (not just IP):
sudo apt update
sudo apt install apache2
sudo a2enmod ssl
sudo a2ensite default-ssl.conf
sudo service apache2 restart
In a terminal, monitor the apache2 processes' CPU usage with top.
Go to https://www.ssllabs.com/ssltest/ and enter the URL of your test server, using https. After a few seconds, the site will ask whether it should ignore the certificate error; confirm, and let it continue the test.
After a few minutes, the test will finish and you will get a report. Go back to the terminal where top is running: the apache2 processes will still be spinning and using CPU, even though there is no more traffic.
With the fixed packages, the apache processes will remain idle.
[Regression Potential]
This upload itself fixes a regression introduced by the fix for a previous regression (bug #1833039), which shows how tricky the situation is. The fix here (clear-retry-flags-before-abort.patch) at least does not change anything in the previous patch from bug #1833039, so that fix remains correct.
The second patch, for HTTP/2 errors with openssl 1.1.1, unfortunately has no test case; it deals with error status and is specific to openssl 1.1.1. It has been applied upstream (and backported to the 2.4.x branch) for many months now. The trunk commit at http://svn.apache.org/viewvc?view=revision&revision=1843954 has a more elaborate explanation of the behavior changes it does, and does not, introduce.
We do have a DEP8 test that covers HTTP/2 SSL downloads, and it passes. But it also passed before this patch. I also manually tried such downloads of varying sizes (up to 10 MB) with no failures.
[Other Info]
While investigating this issue, another fix for an openssl 1.1.1 problem was found in the apache upstream git repo; it involves HTTP/2 and how the code handles SSL_read() return values: https://github.com/apache/httpd/commit/644cff9977efa322fe6c0ae3357a5b8cb1eeec11
No upstream bug was found, nor could I come up with a reproducer case, but it seemed sensible to include that patch in this SRU, which was, after all, triggered by the openssl 1.1.1 upgrade in bionic.
The d/t/run-test-suite DEP8 test is falsely returning success: when called as root it does not actually run, but it does not fail either. I filed bug #1836898 about this, and ran the suite manually for both cosmic and bionic. There is one test failure, but it's a trivial one, introduced by a patch that added a comment: the test actually parses C comments in that particular header file. The bug has the details.
[Original Description]
With the latest apache2 2.4.29-1ubuntu4.7 published to 18.04 LTS (bionic), running ssllabs.com/ssltest against the server to verify its configuration leaves two apache2 processes spinning at 100% CPU indefinitely.
Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.
So far I do not know whether this case is easy or likely to hit in normal HTTPS usage, or only triggered by that testing site.
But given that this is backported to an LTS release and allows an easy DoS, maybe 4.7 should be backed out?
So this is likely a regression in the update to 4.7, which carried only a single fix:
https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1833039
Extra info observed after the ssltest run is over but the processes are still using CPU:
/server-status shows both processes 25234,25235 here in 'Reading' state:
Srv PID Acc M CPU SS Req Conn Child Slot Client Protocol VHost Request
0-0 25234 0/0/0 W 0.00 0 0 0.0 0.00 0.00 127.0.0.1 http/1.1 ip-172-30-1-107.eu-west-1.compu GET /server-status HTTP/1.1
0-0 25234 0/0/0 R 0.00 641 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 505 2 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 501 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 500 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 494 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 16.93 596 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.01 595 1 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/0/0 R 0.00 679 0 0.0 0.00 0.00 64.41.200.106 http/1.1
netstat on system:
tcp6 1 0 172.30.1.57:443 64.41.200.106:58658 CLOSE_WAIT
tcp6 1 0 172.30.1.57:443 64.41.200.107:60842 CLOSE_WAIT
with no other connections to port 443. |
[Impact]
With the latest apache 2.4.29-1ubuntu4.7 published to 18.04 LTS bionic, running ssllabs.com/ssltest against it to verify the configuration leaves 2 apache processes using 100% CPU indefinitely.
Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.
[Test Case]
We didn't find a reproducer that didn't involve https://ssllabs.com/ssltest, so the test case needs a publicly facing server with a DNS record.
On a test system that has a public IP and is reachable via https on a hostname (not just IP):
sudo apt update
sudo apt install apache2
sudo a2enmod ssl
sudo a2ensite default-ssl.conf
sudo service apache2 restart
In a terminal, monitor the apache2 processes' CPU usage with top.
Go to https://www.ssllabs.com/ssltest/ and enter the URL of your test server, using https. After a few seconds, the site will ask whether it should ignore the certificate error; confirm, and let it continue the test.
After a few minutes, the test will finish and you will get a report. Go back to the terminal where top is running: the apache2 processes will be spinning and using CPU, even though there is no more traffic.
With the fixed packages, the apache processes will remain idle.
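The manual top check above can also be scripted. Below is a minimal sketch (not part of the official test case; the `spinning_processes` helper name and the 50% CPU threshold are arbitrary choices) that flags apache2 workers still burning CPU after the scan:

```python
import subprocess

def spinning_processes(name="apache2", cpu_threshold=50.0):
    """Return (pid, %cpu) pairs for processes called `name` whose
    CPU usage, as sampled by ps, is at or above the threshold."""
    out = subprocess.run(
        ["ps", "-eo", "pid,pcpu,comm"],
        capture_output=True, text=True, check=True,
    ).stdout
    hits = []
    for line in out.splitlines()[1:]:  # skip the "PID %CPU COMMAND" header
        fields = line.split(None, 2)
        if len(fields) == 3 and fields[2] == name:
            pid, cpu = int(fields[0]), float(fields[1])
            if cpu >= cpu_threshold:
                hits.append((pid, cpu))
    return hits

# On a fixed server this should come back empty once the scan is over;
# on an affected 2.4.29-1ubuntu4.7 the spinning workers keep showing up.
print(spinning_processes())
```

Note that ps reports CPU usage averaged over the process lifetime, so a lower threshold may be appropriate if the workers had been idle for a long time before spinning.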
[Regression Potential]
This upload fixes a regression introduced by the fix for a previous regression (bug #1833039), which shows that the situation is tricky. The fix here (clear-retry-flags-before-abort.patch) at least does not change anything in the previous patch from bug #1833039, so that earlier fix stands.
The second patch, which deals with http/2 error status handling and is specific to openssl 1.1.1, unfortunately has no test case. It has been applied upstream (and backported to the 2.4.x branch) for many months now. The trunk commit at http://svn.apache.org/viewvc?view=revision&revision=1843954 has a more elaborate explanation of the behavior changes this does, and does not, introduce.
We do have a DEP8 test that covers HTTP/2 SSL downloads, and it passes, but it also passed before this patch. I also manually tried such downloads at varying sizes (up to 10 MB) with no failures.
[Other Info]
While investigating this issue, another fix for an openssl 1.1.1 issue, involving http2 and how the code handles SSL_read() return values, was found in the apache upstream git repo: https://github.com/apache/httpd/commit/644cff9977efa322fe6c0ae3357a5b8cb1eeec11
No upstream bug was found, nor could I come up with a reproducer case, but it seemed sensible to include that patch in this SRU, which was, after all, triggered by the openssl 1.1.1 upgrade in bionic.
The d/t/run-test-suite DEP8 test falsely returns success: it does not actually run because it is called as root, but it does not fail either. I filed bug #1836898 about this, and ran the suite manually for both cosmic and bionic. There is one test failure, but it is a trivial one, introduced by a patch that added a comment; the test actually parses C comments in that particular header file. The bug has the details.
cosmic patched to actually run the testsuite, showing that failure:
http://people.ubuntu.com/~ahasenack/dep8-apache2-1836329-cosmic-actually-run-test-suite/log
Same for bionic:
http://people.ubuntu.com/~ahasenack/dep8-apache2-1836329-bionic-actually-run-test-suite/log
[Original Description]
With the latest apache 2.4.29-1ubuntu4.7 published to 18.04 LTS bionic, running ssllabs.com/ssltest against it to verify the configuration leaves 2 apache processes using 100% CPU indefinitely.
Downgrading to 2.4.29-1ubuntu4.6 makes it no longer reproducible.
So far I do not know whether this case is easy/likely to hit in normal https usage or whether it is only triggered by that testing site.
But given that this is backported to an LTS and allows an easy DoS, maybe the 4.7 update should be backed out?
So this is likely a regression in the update to 4.7, which contains only a single fix:
https://bugs.launchpad.net/ubuntu/+source/apache2/+bug/1833039
Extra info observed after the ssltest is over but the processes are still using CPU:
/server-status shows both processes, 25234 and 25235, here in 'Reading' state:
Srv PID Acc M CPU SS Req Conn Child Slot Client Protocol VHost Request
0-0 25234 0/0/0 W 0.00 0 0 0.0 0.00 0.00 127.0.0.1 http/1.1 ip-172-30-1-107.eu-west-1.compu GET /server-status HTTP/1.1
0-0 25234 0/0/0 R 0.00 641 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 505 2 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 501 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 500 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 494 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/1/1 _ 0.00 604 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 16.93 596 0 0.0 0.00 0.00 64.41.200.107 http/1.1
1-0 25235 0/1/1 _ 0.01 595 1 0.0 0.00 0.00 64.41.200.106 http/1.1
1-0 25235 0/0/0 R 0.00 679 0 0.0 0.00 0.00 64.41.200.106 http/1.1
netstat on system:
tcp6 1 0 172.30.1.57:443 64.41.200.106:58658 CLOSE_WAIT
tcp6 1 0 172.30.1.57:443 64.41.200.107:60842 CLOSE_WAIT
with no other connections to port 443. |
|
2019-07-17 17:13:47 |
Steve Langasek |
apache2 (Ubuntu Cosmic): status |
In Progress |
Fix Committed |
|
2019-07-17 17:13:50 |
Steve Langasek |
bug |
|
|
added subscriber Ubuntu Stable Release Updates Team |
2019-07-17 17:13:51 |
Steve Langasek |
bug |
|
|
added subscriber SRU Verification |
2019-07-17 17:13:56 |
Steve Langasek |
tags |
regression-update |
regression-update verification-needed verification-needed-cosmic |
|
2019-07-17 17:15:36 |
Steve Langasek |
apache2 (Ubuntu Bionic): status |
In Progress |
Fix Committed |
|
2019-07-17 17:15:45 |
Steve Langasek |
tags |
regression-update verification-needed verification-needed-cosmic |
regression-update verification-needed verification-needed-bionic verification-needed-cosmic |
|
2019-07-17 18:17:52 |
Giraffe |
tags |
regression-update verification-needed verification-needed-bionic verification-needed-cosmic |
regression-update verification-done-bionic verification-needed verification-needed-cosmic |
|
2019-07-17 19:17:47 |
Andreas Hasenack |
tags |
regression-update verification-done-bionic verification-needed verification-needed-cosmic |
regression-update verification-done-bionic verification-done-cosmic verification-needed |
|
2019-07-18 18:34:50 |
Launchpad Janitor |
apache2 (Ubuntu Cosmic): status |
Fix Committed |
Fix Released |
|
2019-07-18 18:34:58 |
Steve Langasek |
removed subscriber Ubuntu Stable Release Updates Team |
|
|
|
2019-07-18 21:10:31 |
Launchpad Janitor |
apache2 (Ubuntu Bionic): status |
Fix Committed |
Fix Released |
|
2019-10-08 21:14:00 |
Robie Basak |
tags |
regression-update verification-done-bionic verification-done-cosmic verification-needed |
bionic-openssl-1.1 regression-update verification-done-bionic verification-done-cosmic verification-needed |
|