instances page fails to load if it takes more than 26 seconds

Bug #2045168 reported by Rodrigo Barbieri
This bug affects 2 people
Affects: OpenStack Dashboard Charm
Status: Fix Committed
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

Focal-Ussuri customer environment with lots of resources.

When trying to load the Project > Instances page, if the total time spent loading data exceeds 26 seconds, the page enters a reload loop until the browser times out after 5 minutes.

The 26-second figure was obtained as follows:

1) A 5-minute browser timeout was observed when trying to load the page.
2) The logs were inspected, and some queries were taking very long: glance ~12 seconds, neutron ~8 seconds, etc. (a timing sketch is shown after this list). Queries to nova take at most 3 seconds.
3) In a separate env with zero resources, where the page loads instantly, I added a time.sleep in the api/glance.py file where glance is invoked for images (glance is invoked multiple times when loading the instances page). Sleeping 14 seconds times out at 5 minutes, but sleeping 13 seconds does not time out and the page loads quickly. When it timed out with the 14-second sleep, I tailed the logs and noticed the same group of requests being repeated for a while, always starting with the flavors request. With the 13-second sleep the requests did not repeat.
4) I removed the sleep from the api/glance.py file and added a 26-second sleep in the project/instances/views.py get_data method, right after

image_dict, flavor_dict, volume_dict = futurist_utils.call_functions_parallel(self._get_images, self._get_flavors, self._get_volumes)

With a 26-second sleep it does not time out or repeat the requests, and the page loads fine. But with a 27-second sleep it times out after 5 minutes and keeps repeating the requests in the logs.
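
For illustration, the kind of per-call timing mentioned in step 2 can be captured with a small wrapper along the following lines (a self-contained sketch, not Horizon code; the name log_duration is made up):

import functools
import logging
import time

LOG = logging.getLogger(__name__)

def log_duration(func):
    # Log how long each wrapped API call takes, to spot the slow ones.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            return func(*args, **kwargs)
        finally:
            LOG.warning("%s took %.1f s", func.__name__,
                        time.monotonic() - start)
    return wrapper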

My conclusion is that the get_data method is not allowed to take longer than 26 seconds to finish loading the page; the request "reloads" itself, entering a loop that never finishes if the page cannot be loaded in less than 26 seconds.

Ideally this internal timeout that causes a reload loop should be configurable and more tolerant by default.

description: updated
Revision history for this message
Rodrigo Barbieri (rodrigo-barbieri2010) wrote :

I logged the call stack around the sleep and compared the call stack of the first call with the one that loops: they are identical, but on different threads. Apparently WSGI or Django spawns a new thread when the previous thread takes more than 30 seconds to finish.
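
For reference, the stack logging amounted to something like this (a self-contained sketch; the helper name log_call_site is illustrative, and in the actual test the logging sat inline next to the sleep):

import logging
import threading
import traceback

LOG = logging.getLogger(__name__)

def log_call_site():
    # Record the current thread name and the full call stack leading here.
    LOG.error("thread=%s\n%s",
              threading.current_thread().name,
              "".join(traceback.format_stack()))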

Revision history for this message
Rodrigo Barbieri (rodrigo-barbieri2010) wrote :

I have tried all of the following parameters on the WSGIDaemonProcess directive in the apache2 config:

queue-timeout=300 socket-timeout=300 connect-timeout=300 request-timeout=300 inactivity-timeout=300 startup-timeout=300 deadlock-timeout=300 graceful-timeout=300 eviction-timeout=300 shutdown-timeout=300 restart-interval=300

None of them solved the issue.

Revision history for this message
Rodrigo Barbieri (rodrigo-barbieri2010) wrote :

I did a bit more testing. Looking at apache2's access.log file, I could see that if I try to load the page at T1 with a sleep of 35 seconds configured, a 200 OK line is logged at T1+35, T1+70, T1+105, and so on.

Using tcpdump, I was able to confirm that my browser re-sends the request every 30 seconds. In the same example as above, at T1+30 the browser re-sends the request, while at T1+35 the first request completes and is printed in the access.log file, but the response never shows up in tcpdump (probably because it was invalidated by the re-sent request).

In the Firefox debug panel (F12), it does not show multiple requests being sent, just the main request, which times out after 5 minutes.

One other interesting thing: if I try to load the page and then restart apache2, the request is not lost and the browser does not fail; it keeps trying to load the page (I can see the request being re-sent in the logs and in tcpdump). So apparently the request is not being managed by apache2, given that apache2 was killed. Even if I do a systemctl stop followed by a start (to avoid restart merely "reloading" instead of killing), the browser still keeps re-sending the request. Therefore the connection and its 30-second interval appear to be managed by the browser. I wonder if the 30-second value is something set on apache2 and provided to the browser via a handshake.

Revision history for this message
Rodrigo Barbieri (rodrigo-barbieri2010) wrote (last edit ):

I decided to test using curl (should have thought of this earlier), and now it seems we are back to apache2 being the culprit:

curl 'http://<redacted>/project/instances/' -H 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8' -H 'Accept-Language: en-US,en;q=0.5' -H 'Accept-Encoding: gzip, deflate' -H 'DNT: 1' -H 'Connection: keep-alive' -H 'Referer: http://<redacted>/project/key_pairs' -H 'Cookie: csrftoken=IAkreBpZF1Gc33o3QTxoeLWXW1oJgg6Z7MR1kodf9nxvFPet9l7er19JZHqDavKp; login_region=default; login_domain=admin_domain; sessionid=6xjezzmsyd0pvx65msld0vt3wh1g8anh' -H 'Upgrade-Insecure-Requests: 1' -vvvvvvvv --connect-timeout 600 --max-time 750
* Trying <redacted>:80...
* Connected to <redacted> (<redacted>) port 80 (#0)
> GET /project/instances/ HTTP/1.1
> Host: <redacted>
> User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0
> Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
> Accept-Language: en-US,en;q=0.5
> Accept-Encoding: gzip, deflate
> DNT: 1
> Connection: keep-alive
> Referer: http://<redacted>/project/key_pairs
> Cookie: csrftoken=IAkreBpZF1Gc33o3QTxoeLWXW1oJgg6Z7MR1kodf9nxvFPet9l7er19JZHqDavKp; login_region=default; login_domain=admin_domain; sessionid=6xjezzmsyd0pvx65msld0vt3wh1g8anh
> Upgrade-Insecure-Requests: 1
>
* Empty reply from server
* Closing connection 0
curl: (52) Empty reply from server

Basically, curl receives an empty response at the 30-second mark, even though the server keeps working on the request until completion, as we can see in the access.log file:

<redacted> - - [07/Dec/2023:15:50:14 +0000] "GET /project/instances/ HTTP/1.1" 200 0 "http://<redacted>/project/key_pairs" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0"

Even with all the parameters previously mentioned in comment #2 in place, apache2 still responds (with the empty reply) at the 30-second mark.

Revision history for this message
Rodrigo Barbieri (rodrigo-barbieri2010) wrote :

Adding LogLevel debug to the apache2 config does not print anything at the 30-second mark, but it adds this log message when the request completes:

[Thu Dec 07 16:06:44.264144 2023] [wsgi:debug] [pid 554211:tid 140112348038912] src/server/mod_wsgi.c(2441): [remote <redacted>:42858] mod_wsgi (pid=554211): Failed to write response data: Broken pipe.

The broken pipe indicates that the client side of the connection was already closed by the time mod_wsgi tried to write the response.

summary: - instances page fails to load if it takes more than 26 seconds
+ instances page fails to load if it takes more than 30 seconds
summary: - instances page fails to load if it takes more than 30 seconds
+ instances page fails to load if it takes more than 26 seconds
Revision history for this message
Rodrigo Barbieri (rodrigo-barbieri2010) wrote :

I did more testing around this: I deployed a simple hello world apache2 + wsgi server on focal using the same versions as my horizon env:

apache2 2.4.41-4ubuntu3.15
libapache2-mod-wsgi-py3 4.6.8-1ubuntu3.1

I created a simple Python WSGI app:

import time
import logging

LOG = logging.getLogger(__name__)

def application(environ, start_response):
    status = '200 OK'
    output = b'Hello World!\n'
    LOG.error("before sleep")
    time.sleep(280)
    LOG.error("after sleep")
    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', str(len(output)))]
    start_response(status, response_headers)

    return [output]

I used a very long sleep of 280 seconds to test whether it would be tolerated.

I pretty much copied the horizon apache2 config file into conf-enabled:

WSGIScriptAlias /myapp /usr/local/www/wsgi-scripts/myapp.wsgi process-group=ubuntu
WSGIDaemonProcess ubuntu user=ubuntu group=ubuntu processes=2 threads=10 display-name=%{GROUP} queue-timeout=300 socket-timeout=300 connect-timeout=300 request-timeout=300 inactivity-timeout=300 startup-timeout=300 deadlock-timeout=300 graceful-timeout=300 eviction-timeout=300 shutdown-timeout=300 restart-interval=300
WSGIProcessGroup ubuntu
WSGIApplicationGroup %{GLOBAL}

DocumentRoot /usr/local/www/documents
<Directory /usr/local/www/documents>
  Require all granted
</Directory>

<Directory /usr/local/www/wsgi-scripts>
  Require all granted
</Directory>

LogLevel debug

then curl -g -i -vvvvvvvvvvv localhost/myapp

The 280-second sleep was tolerated both without and with the custom timeout parameters in WSGIDaemonProcess. Therefore the parameters are not needed, and the defaults can accommodate the sleep.

My conclusion at this point is that it is not the browser, it is not apache2, it is not wsgi, it is not curl. It has to be django or horizon itself.

Revision history for this message
Rodrigo Barbieri (rodrigo-barbieri2010) wrote :

Attaching a full list of django settings and their values printed at runtime. I don't see any settings that could be related to the timeout (unless I missed one).

Revision history for this message
Rodrigo Barbieri (rodrigo-barbieri2010) wrote :

Attaching the call stack from where the thread is started down to the get_data method where I inserted the sleep.

Revision history for this message
Rodrigo Barbieri (rodrigo-barbieri2010) wrote :

I did more testing today, adjusting my hello world wsgi + apache2 project to django using the following version, which is the same focal version I have running horizon:

python3-django 2:2.2.12-1ubuntu0.20

I created a very simple page using this tutorial [1] and put my sleep command there. I first tested a 45-second sleep using the django internal server ("python3 manage.py runserver") and it worked fine: it did not drop at 30 seconds or return an empty response. I then configured it to use apache2 and it also worked fine (I had to make some python path adjustments beyond the tutorial to get it working under apache2). I then raised the sleep to 275 seconds and it still worked; my hello world was printed just fine.
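
The test view looked roughly like this (a sketch assuming the layout from the tutorial; the view name follows the tutorial and the sleep matches the 275-second test):

import time

from django.http import HttpResponse

def homePageView(request):
    # Hold the response well past the suspected 30-second limit.
    time.sleep(275)
    return HttpResponse("Hello, World!")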

So now my conclusion is that this is an issue EXCLUSIVE to horizon. There is no timeout to adjust in wsgi, apache2, or django (unless customized by horizon) that would result in the reported issue.

[1] https://djangoforbeginners.com/hello-world/

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to horizon (master)

Related fix proposed to branch: master
Review: https://review.opendev.org/c/openstack/horizon/+/906547

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Related fix proposed to branch: master
Review: https://review.opendev.org/c/openstack/horizon/+/906910

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on horizon (master)

Change abandoned by "Rodrigo Barbieri <email address hidden>" on branch: master
Review: https://review.opendev.org/c/openstack/horizon/+/906547

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to horizon (master)

Reviewed: https://review.opendev.org/c/openstack/horizon/+/906910
Committed: https://opendev.org/openstack/horizon/commit/95089025fda7c8cce6f7195e2a63f7f09efc9e0a
Submitter: "Zuul (22348)"
Branch: master

commit 95089025fda7c8cce6f7195e2a63f7f09efc9e0a
Author: Rodrigo Barbieri <email address hidden>
Date: Fri Jan 26 15:10:41 2024 -0300

    Extend configurable skippability of neutron calls to project instance detail

    The OPENSTACK_INSTANCE_RETRIEVE_IP_ADDRESSES config aids
    in envs struggling to load the instance list due to having
    too many ports or bad neutron plugin performance. However,
    the config does not apply its effect to the instance detail
    page, which cannot be loaded due to the neutron calls
    taking too long.

    This patch extends the config option to the instance
    detail page, allowing the same benefit (and side-effects)
    of the instance list page.

    Related-bug: #2045168
    Change-Id: I3e71a208a1c7212e168d63a259f2adddf27dbabf
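
For context, OPENSTACK_INSTANCE_RETRIEVE_IP_ADDRESSES is a boolean Horizon setting that defaults to True. A minimal sketch of the workaround in Horizon's local_settings.py (with the documented side effect that instance pages stop showing IP addresses):

# Skip the neutron port/floating-IP lookups when building instance pages.
OPENSTACK_INSTANCE_RETRIEVE_IP_ADDRESSES = False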

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to horizon (stable/2023.2)

Related fix proposed to branch: stable/2023.2
Review: https://review.opendev.org/c/openstack/horizon/+/908910

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to horizon (stable/2023.2)

Reviewed: https://review.opendev.org/c/openstack/horizon/+/908910
Committed: https://opendev.org/openstack/horizon/commit/4de36bb649c514f50d2a958c9277097a08b23cec
Submitter: "Zuul (22348)"
Branch: stable/2023.2

commit 4de36bb649c514f50d2a958c9277097a08b23cec
Author: Rodrigo Barbieri <email address hidden>
Date: Fri Jan 26 15:10:41 2024 -0300

    Extend configurable skippability of neutron calls to project instance detail

    The OPENSTACK_INSTANCE_RETRIEVE_IP_ADDRESSES config aids
    in envs struggling to load the instance list due to having
    too many ports or bad neutron plugin performance. However,
    the config does not apply its effect to the instance detail
    page, which cannot be loaded due to the neutron calls
    taking too long.

    This patch extends the config option to the instance
    detail page, allowing the same benefit (and side-effects)
    of the instance list page.

    Related-bug: #2045168
    Change-Id: I3e71a208a1c7212e168d63a259f2adddf27dbabf
    (cherry picked from commit 95089025fda7c8cce6f7195e2a63f7f09efc9e0a)

Revision history for this message
Rodrigo Barbieri (rodrigo-barbieri2010) wrote (last edit ):

I am removing Horizon from the affected projects because I found out that the issue is caused by haproxy. The charmed installation of horizon installs and configures haproxy under the hood, which I didn't know; I had assumed the installation was equivalent to upstream, but I was wrong. (Usually charms add haproxy optionally for HA, but the horizon charm is an exception.) That haproxy is what imposes the 30-second limit. The problem is solved by configuring the charm with the setting

juju config openstack-dashboard haproxy-server-timeout=300000
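
The value is in milliseconds, so 300000 is 5 minutes. In haproxy.cfg terms, the charm's template effectively renders something like the following (a sketch, not the charm's exact template):

defaults
    # haproxy timeouts without a unit are in milliseconds; the template's
    # old effective value of 30000 (30 seconds) caused the empty replies.
    timeout server 300000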

no longer affects: horizon
Changed in charm-openstack-dashboard:
status: New → Invalid
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to charm-openstack-dashboard (stable/xena)

Reviewed: https://review.opendev.org/c/openstack/charm-openstack-dashboard/+/908988
Committed: https://opendev.org/openstack/charm-openstack-dashboard/commit/7eb985778c9973581440214b05442baea1714d2f
Submitter: "Zuul (22348)"
Branch: stable/xena

commit 7eb985778c9973581440214b05442baea1714d2f
Author: Rodrigo Barbieri <email address hidden>
Date: Tue Jan 23 12:24:58 2024 -0300

    Allow configure of OPENSTACK_INSTANCE_RETRIEVE_IP_ADDRESSES

    If network calls to retrieve ports and floating IPs take too long,
    then the project > instances page cannot be loaded. This config
    allows disabling the network calls when loading the page with
    minor side-effects, as a workaround to avoid downtime while other
    performance optimizations can be done on the side to allow
    the page to load so the workaround is no longer needed.

    Closes-bug: #2051003
    Related-bug: #2045168
    Change-Id: Iedad6ef48cbe0b776594f4ad8276d3d713cd360c
    (cherry picked from commit 6b93e9dd8713f41fd50242499cc5413c26780714)
    (cherry picked from commit 45a86be78ab883fd40d1936c26876b734f0e263f)
    (cherry picked from commit 1ec179bb9d3faeac5d01660399671095ec452e69)
    (cherry picked from commit 9e2ae8e65b63185dbd579f957dcf14f5d3ee91cf)
    (cherry picked from commit b2af81e75d681142f31265fa6e457a7e19fc6717)

tags: added: in-stable-xena
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to charm-openstack-dashboard (stable/wallaby)

Related fix proposed to branch: stable/wallaby
Review: https://review.opendev.org/c/openstack/charm-openstack-dashboard/+/910312

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-openstack-dashboard (master)
Changed in charm-openstack-dashboard:
status: Invalid → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to charm-openstack-dashboard (stable/wallaby)

Reviewed: https://review.opendev.org/c/openstack/charm-openstack-dashboard/+/910312
Committed: https://opendev.org/openstack/charm-openstack-dashboard/commit/560adf4ac34cded5eda456b43b35bfebb3311868
Submitter: "Zuul (22348)"
Branch: stable/wallaby

commit 560adf4ac34cded5eda456b43b35bfebb3311868
Author: Rodrigo Barbieri <email address hidden>
Date: Tue Jan 23 12:24:58 2024 -0300

    Allow configure of OPENSTACK_INSTANCE_RETRIEVE_IP_ADDRESSES

    If network calls to retrieve ports and floating IPs take too long,
    then the project > instances page cannot be loaded. This config
    allows disabling the network calls when loading the page with
    minor side-effects, as a workaround to avoid downtime while other
    performance optimizations can be done on the side to allow
    the page to load so the workaround is no longer needed.

    Closes-bug: #2051003
    Related-bug: #2045168
    Change-Id: Iedad6ef48cbe0b776594f4ad8276d3d713cd360c
    (cherry picked from commit 6b93e9dd8713f41fd50242499cc5413c26780714)
    (cherry picked from commit 45a86be78ab883fd40d1936c26876b734f0e263f)
    (cherry picked from commit 1ec179bb9d3faeac5d01660399671095ec452e69)
    (cherry picked from commit 9e2ae8e65b63185dbd579f957dcf14f5d3ee91cf)
    (cherry picked from commit b2af81e75d681142f31265fa6e457a7e19fc6717)
    (cherry picked from commit 7eb985778c9973581440214b05442baea1714d2f)

tags: added: in-stable-wallaby
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to charm-openstack-dashboard (stable/victoria)

Related fix proposed to branch: stable/victoria
Review: https://review.opendev.org/c/openstack/charm-openstack-dashboard/+/911609

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to charm-openstack-dashboard (stable/victoria)

Reviewed: https://review.opendev.org/c/openstack/charm-openstack-dashboard/+/911609
Committed: https://opendev.org/openstack/charm-openstack-dashboard/commit/5d4080d351106d7bc33d10300eb0bf5c27e2cc08
Submitter: "Zuul (22348)"
Branch: stable/victoria

commit 5d4080d351106d7bc33d10300eb0bf5c27e2cc08
Author: Rodrigo Barbieri <email address hidden>
Date: Tue Jan 23 12:24:58 2024 -0300

    Allow configure of OPENSTACK_INSTANCE_RETRIEVE_IP_ADDRESSES

    If network calls to retrieve ports and floating IPs take too long,
    then the project > instances page cannot be loaded. This config
    allows disabling the network calls when loading the page with
    minor side-effects, as a workaround to avoid downtime while other
    performance optimizations can be done on the side to allow
    the page to load so the workaround is no longer needed.

    Closes-bug: #2051003
    Related-bug: #2045168
    Change-Id: Iedad6ef48cbe0b776594f4ad8276d3d713cd360c
    (cherry picked from commit 6b93e9dd8713f41fd50242499cc5413c26780714)
    (cherry picked from commit 45a86be78ab883fd40d1936c26876b734f0e263f)
    (cherry picked from commit 1ec179bb9d3faeac5d01660399671095ec452e69)
    (cherry picked from commit 9e2ae8e65b63185dbd579f957dcf14f5d3ee91cf)
    (cherry picked from commit b2af81e75d681142f31265fa6e457a7e19fc6717)
    (cherry picked from commit 7eb985778c9973581440214b05442baea1714d2f)
    (cherry picked from commit 560adf4ac34cded5eda456b43b35bfebb3311868)

tags: added: in-stable-victoria
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to charm-openstack-dashboard (stable/ussuri)

Related fix proposed to branch: stable/ussuri
Review: https://review.opendev.org/c/openstack/charm-openstack-dashboard/+/913813

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-openstack-dashboard (master)

Reviewed: https://review.opendev.org/c/openstack/charm-openstack-dashboard/+/911623
Committed: https://opendev.org/openstack/charm-openstack-dashboard/commit/09c587116059319305b2a09cb393a084a52d1fc9
Submitter: "Zuul (22348)"
Branch: master

commit 09c587116059319305b2a09cb393a084a52d1fc9
Author: Rodrigo Barbieri <email address hidden>
Date: Wed Mar 6 14:59:11 2024 -0300

    Adjust haproxy timeout to intended values

    Many years ago change Ida7949113594b9b859ab7b4ba8b2bb440bab6e7d
    attempted to change the timeouts of haproxy but did not succeed,
    as deployments were still using the values from the charm's
    templates/haproxy.cfg file, being effectively set to 30 seconds
    and causing timeouts (see bug). Additionally, the description
    of the config options became inaccurate, stating the default to
    be a value that they were really not.

    This patch addresses the timeout value discrepancy, adjusting
    to the original change's intended values.

    Closes-bug: #2045168
    Change-Id: I83405727b4a116ec6f47b61211bf8ef3d2d9fbd6

Changed in charm-openstack-dashboard:
status: In Progress → Fix Committed
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-openstack-dashboard (stable/2023.2)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to charm-openstack-dashboard (stable/ussuri)

Reviewed: https://review.opendev.org/c/openstack/charm-openstack-dashboard/+/913813
Committed: https://opendev.org/openstack/charm-openstack-dashboard/commit/46895dab0de5fab3db2f59d4566afcbb28540fd8
Submitter: "Zuul (22348)"
Branch: stable/ussuri

commit 46895dab0de5fab3db2f59d4566afcbb28540fd8
Author: Rodrigo Barbieri <email address hidden>
Date: Tue Jan 23 12:24:58 2024 -0300

    Allow configure of OPENSTACK_INSTANCE_RETRIEVE_IP_ADDRESSES

    If network calls to retrieve ports and floating IPs take too long,
    then the project > instances page cannot be loaded. This config
    allows disabling the network calls when loading the page with
    minor side-effects, as a workaround to avoid downtime while other
    performance optimizations can be done on the side to allow
    the page to load so the workaround is no longer needed.

    Closes-bug: #2051003
    Related-bug: #2045168
    Change-Id: Iedad6ef48cbe0b776594f4ad8276d3d713cd360c
    (cherry picked from commit 6b93e9dd8713f41fd50242499cc5413c26780714)
    (cherry picked from commit 45a86be78ab883fd40d1936c26876b734f0e263f)
    (cherry picked from commit 1ec179bb9d3faeac5d01660399671095ec452e69)
    (cherry picked from commit 9e2ae8e65b63185dbd579f957dcf14f5d3ee91cf)
    (cherry picked from commit b2af81e75d681142f31265fa6e457a7e19fc6717)
    (cherry picked from commit 7eb985778c9973581440214b05442baea1714d2f)
    (cherry picked from commit 560adf4ac34cded5eda456b43b35bfebb3311868)
    (cherry picked from commit 5d4080d351106d7bc33d10300eb0bf5c27e2cc08)

tags: added: in-stable-ussuri
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-openstack-dashboard (stable/2023.2)

Reviewed: https://review.opendev.org/c/openstack/charm-openstack-dashboard/+/914121
Committed: https://opendev.org/openstack/charm-openstack-dashboard/commit/2f430d15e64766fce4dc398ee23703a395693027
Submitter: "Zuul (22348)"
Branch: stable/2023.2

commit 2f430d15e64766fce4dc398ee23703a395693027
Author: Rodrigo Barbieri <email address hidden>
Date: Wed Mar 6 14:59:11 2024 -0300

    Adjust haproxy timeout to intended values

    Many years ago change Ida7949113594b9b859ab7b4ba8b2bb440bab6e7d
    attempted to change the timeouts of haproxy but did not succeed,
    as deployments were still using the values from the charm's
    templates/haproxy.cfg file, being effectively set to 30 seconds
    and causing timeouts (see bug). Additionally, the description
    of the config options became inaccurate, stating the default to
    be a value that they were really not.

    This patch addresses the timeout value discrepancy, adjusting
    to the original change's intended values.

    Closes-bug: #2045168
    Change-Id: I83405727b4a116ec6f47b61211bf8ef3d2d9fbd6
    (cherry picked from commit 09c587116059319305b2a09cb393a084a52d1fc9)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-openstack-dashboard (stable/2023.1)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-openstack-dashboard (stable/2023.1)

Reviewed: https://review.opendev.org/c/openstack/charm-openstack-dashboard/+/914453
Committed: https://opendev.org/openstack/charm-openstack-dashboard/commit/f013b18c7680174dcbc4217030081a451347c8ef
Submitter: "Zuul (22348)"
Branch: stable/2023.1

commit f013b18c7680174dcbc4217030081a451347c8ef
Author: Rodrigo Barbieri <email address hidden>
Date: Wed Mar 6 14:59:11 2024 -0300

    Adjust haproxy timeout to intended values

    Many years ago change Ida7949113594b9b859ab7b4ba8b2bb440bab6e7d
    attempted to change the timeouts of haproxy but did not succeed,
    as deployments were still using the values from the charm's
    templates/haproxy.cfg file, being effectively set to 30 seconds
    and causing timeouts (see bug). Additionally, the description
    of the config options became inaccurate, stating the default to
    be a value that they were really not.

    This patch addresses the timeout value discrepancy, adjusting
    to the original change's intended values.

    Closes-bug: #2045168
    Change-Id: I83405727b4a116ec6f47b61211bf8ef3d2d9fbd6
    (cherry picked from commit 09c587116059319305b2a09cb393a084a52d1fc9)
    (cherry picked from commit 2f430d15e64766fce4dc398ee23703a395693027)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-openstack-dashboard (stable/zed)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-openstack-dashboard (stable/zed)

Reviewed: https://review.opendev.org/c/openstack/charm-openstack-dashboard/+/914663
Committed: https://opendev.org/openstack/charm-openstack-dashboard/commit/f6f60a37ba99fd56fd02ac1a4c518703e9d9efca
Submitter: "Zuul (22348)"
Branch: stable/zed

commit f6f60a37ba99fd56fd02ac1a4c518703e9d9efca
Author: Rodrigo Barbieri <email address hidden>
Date: Wed Mar 6 14:59:11 2024 -0300

    Adjust haproxy timeout to intended values

    Many years ago change Ida7949113594b9b859ab7b4ba8b2bb440bab6e7d
    attempted to change the timeouts of haproxy but did not succeed,
    as deployments were still using the values from the charm's
    templates/haproxy.cfg file, being effectively set to 30 seconds
    and causing timeouts (see bug). Additionally, the description
    of the config options became inaccurate, stating the default to
    be a value that they were really not.

    This patch addresses the timeout value discrepancy, adjusting
    to the original change's intended values.

    Closes-bug: #2045168
    Change-Id: I83405727b4a116ec6f47b61211bf8ef3d2d9fbd6
    (cherry picked from commit 09c587116059319305b2a09cb393a084a52d1fc9)
    (cherry picked from commit 2f430d15e64766fce4dc398ee23703a395693027)
    (cherry picked from commit f013b18c7680174dcbc4217030081a451347c8ef)

tags: added: in-stable-zed
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-openstack-dashboard (stable/yoga)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-openstack-dashboard (stable/yoga)

Reviewed: https://review.opendev.org/c/openstack/charm-openstack-dashboard/+/914675
Committed: https://opendev.org/openstack/charm-openstack-dashboard/commit/62777b150b1753a5a33e16c6dbfce24bc828a9dd
Submitter: "Zuul (22348)"
Branch: stable/yoga

commit 62777b150b1753a5a33e16c6dbfce24bc828a9dd
Author: Rodrigo Barbieri <email address hidden>
Date: Wed Mar 6 14:59:11 2024 -0300

    Adjust haproxy timeout to intended values

    Many years ago change Ida7949113594b9b859ab7b4ba8b2bb440bab6e7d
    attempted to change the timeouts of haproxy but did not succeed,
    as deployments were still using the values from the charm's
    templates/haproxy.cfg file, being effectively set to 30 seconds
    and causing timeouts (see bug). Additionally, the description
    of the config options became inaccurate, stating the default to
    be a value that they were really not.

    This patch addresses the timeout value discrepancy, adjusting
    to the original change's intended values.

    Closes-bug: #2045168
    Change-Id: I83405727b4a116ec6f47b61211bf8ef3d2d9fbd6
    (cherry picked from commit 09c587116059319305b2a09cb393a084a52d1fc9)
    (cherry picked from commit 2f430d15e64766fce4dc398ee23703a395693027)
    (cherry picked from commit f013b18c7680174dcbc4217030081a451347c8ef)
    (cherry picked from commit f6f60a37ba99fd56fd02ac1a4c518703e9d9efca)

tags: added: in-stable-yoga
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-openstack-dashboard (stable/xena)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-openstack-dashboard (stable/xena)

Reviewed: https://review.opendev.org/c/openstack/charm-openstack-dashboard/+/915008
Committed: https://opendev.org/openstack/charm-openstack-dashboard/commit/fb63730e63fb09919d8b24c4ce34c77c85bb1317
Submitter: "Zuul (22348)"
Branch: stable/xena

commit fb63730e63fb09919d8b24c4ce34c77c85bb1317
Author: Rodrigo Barbieri <email address hidden>
Date: Wed Mar 6 14:59:11 2024 -0300

    Adjust haproxy timeout to intended values

    Many years ago change Ida7949113594b9b859ab7b4ba8b2bb440bab6e7d
    attempted to change the timeouts of haproxy but did not succeed,
    as deployments were still using the values from the charm's
    templates/haproxy.cfg file, being effectively set to 30 seconds
    and causing timeouts (see bug). Additionally, the description
    of the config options became inaccurate, stating the default to
    be a value that they were really not.

    This patch addresses the timeout value discrepancy, adjusting
    to the original change's intended values.

    Closes-bug: #2045168
    Change-Id: I83405727b4a116ec6f47b61211bf8ef3d2d9fbd6
    (cherry picked from commit 09c587116059319305b2a09cb393a084a52d1fc9)
    (cherry picked from commit 2f430d15e64766fce4dc398ee23703a395693027)
    (cherry picked from commit f013b18c7680174dcbc4217030081a451347c8ef)
    (cherry picked from commit f6f60a37ba99fd56fd02ac1a4c518703e9d9efca)
    (cherry picked from commit 62777b150b1753a5a33e16c6dbfce24bc828a9dd)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-openstack-dashboard (stable/wallaby)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-openstack-dashboard (stable/wallaby)

Reviewed: https://review.opendev.org/c/openstack/charm-openstack-dashboard/+/915147
Committed: https://opendev.org/openstack/charm-openstack-dashboard/commit/9885182a9316c3ff9e73578989fb790ce4677416
Submitter: "Zuul (22348)"
Branch: stable/wallaby

commit 9885182a9316c3ff9e73578989fb790ce4677416
Author: Rodrigo Barbieri <email address hidden>
Date: Wed Mar 6 14:59:11 2024 -0300

    Adjust haproxy timeout to intended values

    Many years ago change Ida7949113594b9b859ab7b4ba8b2bb440bab6e7d
    attempted to change the timeouts of haproxy but did not succeed,
    as deployments were still using the values from the charm's
    templates/haproxy.cfg file, being effectively set to 30 seconds
    and causing timeouts (see bug). Additionally, the description
    of the config options became inaccurate, stating the default to
    be a value that they were really not.

    This patch addresses the timeout value discrepancy, adjusting
    to the original change's intended values.

    Closes-bug: #2045168
    Change-Id: I83405727b4a116ec6f47b61211bf8ef3d2d9fbd6
    (cherry picked from commit 09c587116059319305b2a09cb393a084a52d1fc9)
    (cherry picked from commit 2f430d15e64766fce4dc398ee23703a395693027)
    (cherry picked from commit f013b18c7680174dcbc4217030081a451347c8ef)
    (cherry picked from commit f6f60a37ba99fd56fd02ac1a4c518703e9d9efca)
    (cherry picked from commit 62777b150b1753a5a33e16c6dbfce24bc828a9dd)
    (cherry picked from commit fb63730e63fb09919d8b24c4ce34c77c85bb1317)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-openstack-dashboard (stable/victoria)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-openstack-dashboard (stable/victoria)

Reviewed: https://review.opendev.org/c/openstack/charm-openstack-dashboard/+/915344
Committed: https://opendev.org/openstack/charm-openstack-dashboard/commit/a7a427ad6a11fe5ac3952d072a79fa11b9cda083
Submitter: "Zuul (22348)"
Branch: stable/victoria

commit a7a427ad6a11fe5ac3952d072a79fa11b9cda083
Author: Rodrigo Barbieri <email address hidden>
Date: Wed Mar 6 14:59:11 2024 -0300

    Adjust haproxy timeout to intended values

    Many years ago change Ida7949113594b9b859ab7b4ba8b2bb440bab6e7d
    attempted to change the timeouts of haproxy but did not succeed,
    as deployments were still using the values from the charm's
    templates/haproxy.cfg file, being effectively set to 30 seconds
    and causing timeouts (see bug). Additionally, the description
    of the config options became inaccurate, stating the default to
    be a value that they were really not.

    This patch addresses the timeout value discrepancy, adjusting
    to the original change's intended values.

    Closes-bug: #2045168
    Change-Id: I83405727b4a116ec6f47b61211bf8ef3d2d9fbd6
    (cherry picked from commit 09c587116059319305b2a09cb393a084a52d1fc9)
    (cherry picked from commit 2f430d15e64766fce4dc398ee23703a395693027)
    (cherry picked from commit f013b18c7680174dcbc4217030081a451347c8ef)
    (cherry picked from commit f6f60a37ba99fd56fd02ac1a4c518703e9d9efca)
    (cherry picked from commit 62777b150b1753a5a33e16c6dbfce24bc828a9dd)
    (cherry picked from commit fb63730e63fb09919d8b24c4ce34c77c85bb1317)
    (cherry picked from commit 9885182a9316c3ff9e73578989fb790ce4677416)

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-openstack-dashboard (stable/ussuri)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-openstack-dashboard (stable/ussuri)

Reviewed: https://review.opendev.org/c/openstack/charm-openstack-dashboard/+/916087
Committed: https://opendev.org/openstack/charm-openstack-dashboard/commit/f268e2c4fdae1c0122cc1d3be82a1432746695c2
Submitter: "Zuul (22348)"
Branch: stable/ussuri

commit f268e2c4fdae1c0122cc1d3be82a1432746695c2
Author: Rodrigo Barbieri <email address hidden>
Date: Wed Mar 6 14:59:11 2024 -0300

    Adjust haproxy timeout to intended values

    Many years ago change Ida7949113594b9b859ab7b4ba8b2bb440bab6e7d
    attempted to change the timeouts of haproxy but did not succeed,
    as deployments were still using the values from the charm's
    templates/haproxy.cfg file, being effectively set to 30 seconds
    and causing timeouts (see bug). Additionally, the description
    of the config options became inaccurate, stating the default to
    be a value that they were really not.

    This patch addresses the timeout value discrepancy, adjusting
    to the original change's intended values.

    Closes-bug: #2045168
    Change-Id: I83405727b4a116ec6f47b61211bf8ef3d2d9fbd6
    (cherry picked from commit 09c587116059319305b2a09cb393a084a52d1fc9)
    (cherry picked from commit 2f430d15e64766fce4dc398ee23703a395693027)
    (cherry picked from commit f013b18c7680174dcbc4217030081a451347c8ef)
    (cherry picked from commit f6f60a37ba99fd56fd02ac1a4c518703e9d9efca)
    (cherry picked from commit 62777b150b1753a5a33e16c6dbfce24bc828a9dd)
    (cherry picked from commit fb63730e63fb09919d8b24c4ce34c77c85bb1317)
    (cherry picked from commit 9885182a9316c3ff9e73578989fb790ce4677416)
    (cherry picked from commit a7a427ad6a11fe5ac3952d072a79fa11b9cda083)
