Passing health check does not recover worker from its error state

Bug #2003189 reported by Sistemi CeSIA
Affects              Status        Importance  Assigned to          Milestone
Apache2 Web Server   Confirmed     Unknown
apache2 (Ubuntu)     Fix Released  Undecided   Unassigned
Jammy                Triaged       Undecided   Michał Małoszewski
Kinetic              Triaged       Undecided   Michał Małoszewski

Bug Description

[Original Report]

While we were in the process of enabling mod_proxy_hcheck on some of our apache2 nodes we encountered an unusual behavior: sometimes, after rebooting a backend, its worker status remains marked as "Init Err" (balancer manager) until another request is made to the backend, no matter how many health checks complete successfully.

The following list shows the sequence of events leading to the problem:

1. Watchdog triggers health check, request is successful; worker status is "Init Ok"
2. HTTP request to apache2 with unreachable backend (rebooting); status becomes "Init Err"
3. Watchdog triggers another health check, request is again successful because the backend recovered; worker status remains "Init Err"
4. same as 3
5. same as 4

The only way for the worker status to recover is to wait for "hcfails" unsuccessful health checks followed by "hcpasses" successful ones, or to wait for legitimate traffic to retry the failed worker, which may not happen for a long time for rarely used applications.
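
As a workaround, we noticed that a single manual request to the proxied application makes the worker retry and recover immediately, for example (the Host header here matches the test case further down):

# curl -s -o /dev/null -H 'Host: myapp.example.com' http://localhost/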

This surprised us, since we expected the worker status to recover after "hcpasses" successful health checks; however, this does not seem to happen when the error status was triggered by ordinary traffic to the backend (i.e., not by health checks).

[Test Case]
# apt update && apt dist-upgrade -y
# apt install -y apache2 python3
# cat > /etc/apache2/sites-available/httpd-hcheck-initerr.conf << '__EOF__'
<VirtualHost *:80>
    ServerAdmin <email address hidden>
    DocumentRoot "/var/www/html"
    ServerName myapp.example.com
    ErrorLog "${APACHE_LOG_DIR}/myapp.example.com-error_log"
    CustomLog "${APACHE_LOG_DIR}/myapp.example.com-access_log" common

    ProxyPass / balancer://myapp stickysession=JSESSIONID

</VirtualHost>
__EOF__
# cat > /etc/apache2/conf-available/balancers.conf << '__EOF__'
<Proxy balancer://myapp>
    BalancerMember http://127.0.0.1:8080/ route=app-route hcmethod=GET hcinterval=5 hcpasses=1 hcfails=1
</Proxy>
__EOF__
# a2enmod status
# a2enmod proxy
# a2enmod proxy_balancer
# a2enmod proxy_http
# a2enmod proxy_hcheck
# a2enmod lbmethod_byrequests
# a2enconf balancers
# a2ensite httpd-hcheck-initerr
# python3 -m http.server --bind 127.0.0.1 8080 &
# PYTHON_PID=$!
# systemctl restart apache2
# curl -s localhost/server-status?auto | grep ProxyBalancer | grep Status
# kill -9 $PYTHON_PID
# curl -s localhost -H 'host: myapp.example.com' -o /dev/null
# curl -s localhost/server-status?auto | grep ProxyBalancer | grep Status
# python3 -m http.server --bind 127.0.0.1 8080 &
# sleep 10
# curl -s localhost/server-status?auto | grep ProxyBalancer | grep Status
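
Optionally, to observe the status transitions in real time while the watchdog fires its checks (every hcinterval=5 seconds in this configuration), a loop like the following can be kept running in a second terminal; it is an observation aid, not part of the test case itself:

# watch -n 2 "curl -s localhost/server-status?auto | grep -E 'ProxyBalancer.*Status'"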

Example failed output:

Serving HTTP on 127.0.0.1 port 8080 (http://127.0.0.1:8080/) ...
ProxyBalancer[0]Worker[0]Status: Init Ok
scripts: line 6: 203609 Killed python3 -m http.server --bind 127.0.0.1 8080
ProxyBalancer[0]Worker[0]Status: Init Err
Serving HTTP on 127.0.0.1 port 8080 (http://127.0.0.1:8080/) ...
127.0.0.1 - - [18/Jan/2023 12:24:05] "GET / HTTP/1.0" 200 -
127.0.0.1 - - [18/Jan/2023 12:24:10] "GET / HTTP/1.0" 200 -
ProxyBalancer[0]Worker[0]Status: Init Err

Example of (expected) successful output:

Serving HTTP on 127.0.0.1 port 8080 (http://127.0.0.1:8080/) ...
ProxyBalancer[0]Worker[0]Status: Init Ok
scripts: line 6: 202190 Killed python3 -m http.server --bind 127.0.0.1 8080
ProxyBalancer[0]Worker[0]Status: Init Err
Serving HTTP on 127.0.0.1 port 8080 (http://127.0.0.1:8080/) ...
127.0.0.1 - - [18/Jan/2023 12:23:12] "GET / HTTP/1.0" 200 -
127.0.0.1 - - [18/Jan/2023 12:23:17] "GET / HTTP/1.0" 200 -
ProxyBalancer[0]Worker[0]Status: Init Ok

Upstream bug: https://bz.apache.org/bugzilla/show_bug.cgi?id=66302
Upstream fix: https://svn.apache.org/viewvc?view=revision&revision=1906496

We would like to see this fix backported to Jammy.
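
For anyone wanting to inspect the upstream fix locally, the revision can be examined with a Subversion client (a convenience sketch; it assumes svn is installed and that the change landed on httpd trunk):

# svn log -v -c 1906496 https://svn.apache.org/repos/asf/httpd
# svn diff -c 1906496 https://svn.apache.org/repos/asf/httpd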

Revision history for this message
In the upstream Apache bug (#66302), Alessandro-cavalier7 (alessandro-cavalier7) wrote :

Created attachment 38407
mod_proxy_hcheck: recover worker from error state

[The comment restates, verbatim, the problem description from the original report above.]

We believe this behavior was accidentally introduced in r1725523. The patch we are proposing seems to fix the problem in our environment.

Revision history for this message
In the upstream Apache bug (#66302), V-jiz-h (v-jiz-h) wrote :

Thanks for the report.

In general, health check errors are considered different from "normal" errors, and I can see why the behavior described above is both confusing and could be considered "wrong".

The patch looks like a reasonable approach.

Revision history for this message
Paride Legovini (paride) wrote :

Hello and thanks for this bug report and for the reproducer. According to the upstream release notes [1], this bug is fixed in apache2 2.4.55, which is currently in Debian unstable but not in Lunar, so step 1 for fixing this is merging apache2 from Debian to Ubuntu. From what I can tell from the Debian d/changelog, this shouldn't be a problematic merge.

Then we'll have to cherry-pick the patch you identified and apply it to the apache2 package in Jammy and Kinetic, following the SRU process [2].
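
In practice that cherry-pick could look roughly like the sketch below (illustrative only; the source version, patch file name, and paths are assumptions, and the authoritative steps are in [3]):

# pull-lp-source apache2 jammy        # from ubuntu-dev-tools; fetches the Jammy source package
# cd apache2-2.4.52                   # directory name depends on the actual Jammy version
# export QUILT_PATCHES=debian/patches
# quilt import -P lp2003189-hcheck-recover.patch /tmp/r1906496.diff
# dch -i                              # record the change in debian/changelog
# debuild -S                          # build a source package, e.g. for a PPA upload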

---

From your comment in the linked upstream bug, it looks like you already rebuilt the package with the fix for your local environment. Moreover, I see that you already formatted the bug report in a way similar to the SRU bug template (again [2]).

If you're interested in getting involved in Ubuntu development, I think you are in a good position here. The process the Server Team follows to do SRUs is documented in [3], and I volunteer to offer some guidance where needed; just let us know. (Should you want to get involved, I suggest turning your account into a "personal" one, as in my understanding it is currently a team account.)

Paride

[1] https://downloads.apache.org/httpd/CHANGES_2.4.55
[2] https://wiki.ubuntu.com/StableReleaseUpdates
[3] https://github.com/canonical/ubuntu-maintainers-handbook/blob/main/PackageFixing.md

tags: added: server-todo
Changed in apache2 (Ubuntu):
status: New → Triaged
Changed in apache2 (Ubuntu Jammy):
status: New → Triaged
Changed in apache2 (Ubuntu Kinetic):
status: New → Triaged
Paride Legovini (paride)
tags: added: needs-merge
Changed in apache2:
status: Unknown → Confirmed
Changed in apache2 (Ubuntu Jammy):
assignee: nobody → Michał Małoszewski (michal-maloszewski99)
Changed in apache2 (Ubuntu Kinetic):
assignee: nobody → Michał Małoszewski (michal-maloszewski99)
Revision history for this message
Christian Ehrhardt (paelzer) wrote :

It is in Lunar:
 apache2 | 2.4.55-1ubuntu1 | lunar | source, amd64, arm64, armhf, i386, ppc64el, riscv64, s390x
Set fixed in Lunar.

Michal will have a look: verify the testcase, apply the patch for Jammy and Kinetic, build a PPA, and then get this into SRU processing.
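
The archive listing above is rmadison output (rmadison ships in devscripts); the check can be re-run by anyone with:

# rmadison apache2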

Changed in apache2 (Ubuntu):
status: Triaged → Fix Released