instances page fails to load if it takes more than 26 seconds
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| OpenStack Dashboard Charm | Fix Committed | Undecided | Unassigned | |
Bug Description
Focal-ussuri customer environment with lots of resources.
When trying to load the Project > Instances page, if the total time spent loading data exceeds 26 seconds, the page enters a reload loop until the browser times out after 5 minutes.
The 26-second figure was obtained as follows:
1) A 5-minute browser timeout was observed when trying to load the page.
2) The logs were inspected, showing that some queries were taking very long: glance ~12 seconds, neutron ~8 seconds, etc. Queries to nova take at most 3 seconds.
3) In a separate environment with zero resources, where the page loads instantly, I added a time.sleep in the api/glance.py file where glance is invoked for images (glance is invoked multiple times when loading the instances page). Sleeping 14 seconds times out at 5 minutes, but sleeping 13 seconds does not time out and the page loads quickly. When it timed out with 14 seconds, I tailed the logs and noticed the same group of requests being repeated for a while, always starting with the flavors request. With the 13-second sleep the requests did not repeat.
4) Removed the sleep from api/glance.py and added a sleep of 26 secs in the project/
image_dict, flavor_dict, volume_dict = futurist_
With a 26-second sleep it neither times out nor repeats the requests; the page loads fine. But with a 27-second sleep it times out at 5 minutes and keeps repeating the requests in the logs.
My conclusion is that the get_data method does not tolerate taking longer than 26 seconds to finish loading the page: it "reloads" itself, entering a loop that never finishes if the page cannot load in under 26 seconds.
Ideally this internal timeout that causes the reload loop should be configurable and more tolerant by default.
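The delay-injection bisection described in steps 3 and 4 can be sketched as follows. Here `probe` is a hypothetical stand-in for manually loading the instances page with an injected time.sleep; in the real investigation each probe was a manual page load against a modified api/glance.py:

```python
def probe(delay):
    """Hypothetical probe: stands in for loading the instances page with an
    artificial time.sleep(delay) injected into horizon. Returns True if the
    page would load (delay at or below the threshold observed in this bug)."""
    THRESHOLD = 26  # seconds, as measured in this bug report
    return delay <= THRESHOLD

def bisect_threshold(lo, hi):
    # Narrow the largest tolerated delay down to 1-second granularity:
    # lo is always a known-good delay, hi a known-bad one.
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if probe(mid):
            lo = mid
        else:
            hi = mid
    return lo
```

With the observed behavior, `bisect_threshold(0, 60)` converges on 26, matching the manual result.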
description: | updated |
Rodrigo Barbieri (rodrigo-barbieri2010) wrote : | #1 |
Rodrigo Barbieri (rodrigo-barbieri2010) wrote : | #2 |
I have tried all the parameters queue-timeout=300 socket-timeout=300 connect-timeout=300 request-timeout=300 inactivity-
on the WSGIDaemonProcess line of the apache2 config, but none of them solved the issue.
Rodrigo Barbieri (rodrigo-barbieri2010) wrote : | #3 |
I did a bit more testing. By looking at the access.log file of apache2, I was able to see that if I try to load the page at T1 having configured a sleep of 35 seconds, I get a 200 OK line logged at T1+35, T1+70, T1+105, and so on.
Using TCPDUMP, I was able to confirm that my browser is re-sending requests every 30 seconds. So using the same example above, at T1+30 seconds my browser re-sends the request, while at T1+35 seconds the request completes and is printed in the access.log file, but the response does not show up in TCPDUMP (probably because it has been invalidated by the request being re-sent).
In the debug panel of Firefox (F12), it doesn't show multiple requests being sent, just the main request that times out after 5 minutes.
One other interesting thing: if I try to load the page and then restart apache2, the request isn't lost and the browser does not fail; it keeps trying to load the page (I can see the request being re-sent in the logs and in TCPDUMP). So apparently the request is not being managed by apache2, since apache2 was killed. Even if I do a systemctl stop and then start (to avoid restart merely "reloading" instead of killing), the browser still keeps re-sending the request. Therefore the connection and its 30-second interval are apparently managed by the browser. I wonder if the 30-second value is something set on apache2 and provided to the browser via handshake.
Rodrigo Barbieri (rodrigo-barbieri2010) wrote (last edit ): | #4 |
I decided to test using curl (should have thought of this earlier) and now it seems we are back to apache2 being the culprit:
curl 'http://<redacted>
* Trying <redacted>:80...
* Connected to <redacted> (<redacted>) port 80 (#0)
> GET /project/instances/ HTTP/1.1
> Host: <redacted>
> User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0
> Accept: text/html,
> Accept-Language: en-US,en;q=0.5
> Accept-Encoding: gzip, deflate
> DNT: 1
> Connection: keep-alive
> Referer: http://<redacted>
> Cookie: csrftoken=
> Upgrade-
>
* Empty reply from server
* Closing connection 0
curl: (52) Empty reply from server
Basically curl receives an empty response at the 30-second mark, even though the server keeps working on the request until completion, as we can see in the access.log file:
<redacted> - - [07/Dec/
Even with all the parameters previously mentioned in comment #2, apache2 still returns the empty reply at the 30-second mark.
Rodrigo Barbieri (rodrigo-barbieri2010) wrote : | #5 |
Adding loglevel debug to the apache2 config does not print anything at the 30-second mark, but adds this log message when the request completes:
[Thu Dec 07 16:06:44.264144 2023] [wsgi:debug] [pid 554211:tid 140112348038912] src/server/
summary: |
- instances page fails to load if it takes more than 26 seconds
+ instances page fails to load if it takes more than 30 seconds |
summary: |
- instances page fails to load if it takes more than 30 seconds
+ instances page fails to load if it takes more than 26 seconds |
Rodrigo Barbieri (rodrigo-barbieri2010) wrote : | #6 |
I did more testing around this: I deployed a simple hello-world apache2 + wsgi server on focal using the same versions as my horizon env:
apache2 2.4.41-4ubuntu3.15
libapache2-
created a simple python wsgi app:
import time
import logging

LOG = logging.getLogger(__name__)

def application(environ, start_response):
    status = '200 OK'
    output = b'Hello World!\n'
    LOG.info('before sleep')
    time.sleep(280)
    LOG.info('after sleep')
    response_headers = [('Content-Type', 'text/plain'),
                        ('Content-Length', str(len(output)))]
    start_response(status, response_headers)
    return [output]
I set a very long sleep of 280 seconds to test whether the stack would tolerate it.
I pretty much copied the horizon apache2 config file in conf-enabled:
WSGIScriptAlias /myapp /usr/local/
WSGIDaemonProcess ubuntu user=ubuntu group=ubuntu processes=2 threads=10 display-
WSGIProcessGroup ubuntu
WSGIApplication
DocumentRoot /usr/local/
<Directory /usr/local/
Require all granted
</Directory>
<Directory /usr/local/
Require all granted
</Directory>
LogLevel debug
then curl -g -i -vvvvvvvvvvv localhost/myapp
The sleep of 280 secs was tolerated both with and without the custom timeout parameters in WSGIDaemonProcess. Therefore, the parameters are not needed and the defaults can accommodate the sleep.
My conclusion at this point is that it is not the browser, it is not apache2, it is not wsgi, it is not curl. It has to be django or horizon itself.
Rodrigo Barbieri (rodrigo-barbieri2010) wrote : | #7 |
Attaching a full list of django settings and their values printed at runtime. I don't see settings that could be related to the timeout (if I didn't miss it)
Rodrigo Barbieri (rodrigo-barbieri2010) wrote : | #8 |
Rodrigo Barbieri (rodrigo-barbieri2010) wrote : | #9 |
Attaching the call stack from where the thread is started down to the get_data method where I inserted the sleep.
Rodrigo Barbieri (rodrigo-barbieri2010) wrote : | #10 |
Rodrigo Barbieri (rodrigo-barbieri2010) wrote : | #11 |
I did more testing today, adjusting my hello-world wsgi + apache2 project to django using the following version, which is the same focal version I have running horizon:
python3-django 2:2.2.12-
I created a very simple page using this tutorial [1] and put my sleep command there. I first tested a 45-second sleep using the django internal server ("python3 manage.py runserver") and it worked fine: it didn't drop at 30 seconds nor return an empty response. I then configured it to use apache2 and it also worked fine (I had to make some python path adjustments beyond the tutorial to make it work in apache2). I then raised the sleep to 275 secs and it still worked; my hello world was printed just fine.
So now my conclusion is that this issue is EXCLUSIVE to horizon. There is no timeout to adjust in wsgi, apache2, or django (unless customized by horizon) that would produce the reported behavior.
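The layer-by-layer isolation test above can also be reproduced with the stdlib wsgiref server instead of apache2. This is a minimal sketch, not the setup actually used in the bug: the sleep is shortened so it runs quickly, and the port number is an arbitrary choice:

```python
import threading
import time
from wsgiref.simple_server import make_server
from urllib.request import urlopen

def slow_app(environ, start_response):
    # Stands in for the hello-world app above; the real test slept 280s,
    # shortened here so the sketch finishes quickly.
    time.sleep(0.2)
    body = b'Hello World!\n'
    start_response('200 OK', [('Content-Type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]

def fetch_once(port=8099):
    # Serve exactly one request on a background thread, fetch it, and
    # return the body; a dropped connection would raise here.
    server = make_server('127.0.0.1', port, slow_app)
    t = threading.Thread(target=server.handle_request, daemon=True)
    t.start()
    body = urlopen('http://127.0.0.1:%d/' % port).read()
    t.join()
    server.server_close()
    return body
```

If a layer in between enforced a timeout shorter than the sleep, `fetch_once` would fail with an empty or reset connection, which is exactly the symptom curl showed against the charmed deployment.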
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to horizon (master) | #12 |
Related fix proposed to branch: master
Review: https:/
OpenStack Infra (hudson-openstack) wrote : | #13 |
Related fix proposed to branch: master
Review: https:/
OpenStack Infra (hudson-openstack) wrote : Change abandoned on horizon (master) | #14 |
Change abandoned by "Rodrigo Barbieri <email address hidden>" on branch: master
Review: https:/
OpenStack Infra (hudson-openstack) wrote : Related fix merged to horizon (master) | #15 |
Reviewed: https:/
Committed: https:/
Submitter: "Zuul (22348)"
Branch: master
commit 95089025fda7c8c
Author: Rodrigo Barbieri <email address hidden>
Date: Fri Jan 26 15:10:41 2024 -0300
Extend configurable skippability of neutron calls to project instance detail
The OPENSTACK_
in envs struggling to load the instance list due to having
too many ports or bad neutron plugin performance. However,
the config does not apply its effect to the instance detail
page, which cannot be loaded due to the neutron calls
taking too long.
This patch extends the config option to the instance
detail page, allowing the same benefit (and side-effects)
of the instance list page.
Related-bug: #2045168
Change-Id: I3e71a208a1c721
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to horizon (stable/2023.2) | #16 |
Related fix proposed to branch: stable/2023.2
Review: https:/
OpenStack Infra (hudson-openstack) wrote : Related fix merged to horizon (stable/2023.2) | #17 |
Reviewed: https:/
Committed: https:/
Submitter: "Zuul (22348)"
Branch: stable/2023.2
commit 4de36bb649c514f
Author: Rodrigo Barbieri <email address hidden>
Date: Fri Jan 26 15:10:41 2024 -0300
Extend configurable skippability of neutron calls to project instance detail
The OPENSTACK_
in envs struggling to load the instance list due to having
too many ports or bad neutron plugin performance. However,
the config does not apply its effect to the instance detail
page, which cannot be loaded due to the neutron calls
taking too long.
This patch extends the config option to the instance
detail page, allowing the same benefit (and side-effects)
of the instance list page.
Related-bug: #2045168
Change-Id: I3e71a208a1c721
(cherry picked from commit 95089025fda7c8c
Rodrigo Barbieri (rodrigo-barbieri2010) wrote (last edit ): | #18 |
I am removing Horizon from the affected projects because I found out that the issue is caused by haproxy. The charmed installation of horizon installs and configures haproxy under the hood, which I didn't know; I had assumed the installation was equivalent to upstream, but I was wrong. Usually charms add haproxy optionally for HA, but the horizon charm is an exception. That haproxy is what imposes the 30-second limit. The problem is solved by configuring the charm with the setting:
juju config openstack-dashboard haproxy-
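For context, the kind of haproxy stanza involved looks roughly like this. This is a hypothetical sketch; the actual template the charm renders and the values it uses may differ:

```
defaults
    timeout connect 5s
    # A client/server timeout near 30s here would cut the connection
    # mid-request, producing the empty reply curl observed.
    timeout client  300s
    timeout server  300s
```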
no longer affects: | horizon |
Changed in charm-openstack-dashboard: | |
status: | New → Invalid |
OpenStack Infra (hudson-openstack) wrote : Related fix merged to charm-openstack-dashboard (stable/xena) | #19 |
Reviewed: https:/
Committed: https:/
Submitter: "Zuul (22348)"
Branch: stable/xena
commit 7eb985778c99735
Author: Rodrigo Barbieri <email address hidden>
Date: Tue Jan 23 12:24:58 2024 -0300
Allow configure of OPENSTACK_
If network calls to retrieve ports and floating IPs take too long,
then the project > instances page cannot be loaded. This config
allows disabling the network calls when loading the page with
minor side-effects, as a workaround to avoid downtime while other
performance optimizations can be done on the side to allow
the page to load so the workaround is no longer needed.
Closes-bug: #2051003
Related-bug: #2045168
Change-Id: Iedad6ef48cbe0b
(cherry picked from commit 6b93e9dd8713f41
(cherry picked from commit 45a86be78ab883f
(cherry picked from commit 1ec179bb9d3faea
(cherry picked from commit 9e2ae8e65b63185
(cherry picked from commit b2af81e75d68114
tags: | added: in-stable-xena |
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to charm-openstack-dashboard (stable/wallaby) | #20 |
Related fix proposed to branch: stable/wallaby
Review: https:/
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-openstack-dashboard (master) | #21 |
Fix proposed to branch: master
Review: https:/
Changed in charm-openstack-dashboard: | |
status: | Invalid → In Progress |
OpenStack Infra (hudson-openstack) wrote : Related fix merged to charm-openstack-dashboard (stable/wallaby) | #22 |
Reviewed: https:/
Committed: https:/
Submitter: "Zuul (22348)"
Branch: stable/wallaby
commit 560adf4ac34cded
Author: Rodrigo Barbieri <email address hidden>
Date: Tue Jan 23 12:24:58 2024 -0300
Allow configure of OPENSTACK_
If network calls to retrieve ports and floating IPs take too long,
then the project > instances page cannot be loaded. This config
allows disabling the network calls when loading the page with
minor side-effects, as a workaround to avoid downtime while other
performance optimizations can be done on the side to allow
the page to load so the workaround is no longer needed.
Closes-bug: #2051003
Related-bug: #2045168
Change-Id: Iedad6ef48cbe0b
(cherry picked from commit 6b93e9dd8713f41
(cherry picked from commit 45a86be78ab883f
(cherry picked from commit 1ec179bb9d3faea
(cherry picked from commit 9e2ae8e65b63185
(cherry picked from commit b2af81e75d68114
(cherry picked from commit 7eb985778c99735
tags: | added: in-stable-wallaby |
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to charm-openstack-dashboard (stable/victoria) | #23 |
Related fix proposed to branch: stable/victoria
Review: https:/
OpenStack Infra (hudson-openstack) wrote : Related fix merged to charm-openstack-dashboard (stable/victoria) | #24 |
Reviewed: https:/
Committed: https:/
Submitter: "Zuul (22348)"
Branch: stable/victoria
commit 5d4080d351106d7
Author: Rodrigo Barbieri <email address hidden>
Date: Tue Jan 23 12:24:58 2024 -0300
Allow configure of OPENSTACK_
If network calls to retrieve ports and floating IPs take too long,
then the project > instances page cannot be loaded. This config
allows disabling the network calls when loading the page with
minor side-effects, as a workaround to avoid downtime while other
performance optimizations can be done on the side to allow
the page to load so the workaround is no longer needed.
Closes-bug: #2051003
Related-bug: #2045168
Change-Id: Iedad6ef48cbe0b
(cherry picked from commit 6b93e9dd8713f41
(cherry picked from commit 45a86be78ab883f
(cherry picked from commit 1ec179bb9d3faea
(cherry picked from commit 9e2ae8e65b63185
(cherry picked from commit b2af81e75d68114
(cherry picked from commit 7eb985778c99735
(cherry picked from commit 560adf4ac34cded
tags: | added: in-stable-victoria |
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to charm-openstack-dashboard (stable/ussuri) | #25 |
Related fix proposed to branch: stable/ussuri
Review: https:/
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-openstack-dashboard (master) | #26 |
Reviewed: https:/
Committed: https:/
Submitter: "Zuul (22348)"
Branch: master
commit 09c587116059319
Author: Rodrigo Barbieri <email address hidden>
Date: Wed Mar 6 14:59:11 2024 -0300
Adjust haproxy timeout to intended values
Many years ago change Ida7949113594b9
attempted to change the timeouts of haproxy but did not succeed,
as deployments were still using the values from the charm's
templates/
and causing timeouts (see bug). Additionally, the description
of the config options became inaccurate, stating the default to
be a value that they were really not.
This patch addresses the timeout value discrepancy, adjusting
to the original change's intended values.
Closes-bug: #2045168
Change-Id: I83405727b4a116
Changed in charm-openstack-dashboard: | |
status: | In Progress → Fix Committed |
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-openstack-dashboard (stable/2023.2) | #27 |
Fix proposed to branch: stable/2023.2
Review: https:/
OpenStack Infra (hudson-openstack) wrote : Related fix merged to charm-openstack-dashboard (stable/ussuri) | #28 |
Reviewed: https:/
Committed: https:/
Submitter: "Zuul (22348)"
Branch: stable/ussuri
commit 46895dab0de5fab
Author: Rodrigo Barbieri <email address hidden>
Date: Tue Jan 23 12:24:58 2024 -0300
Allow configure of OPENSTACK_
If network calls to retrieve ports and floating IPs take too long,
then the project > instances page cannot be loaded. This config
allows disabling the network calls when loading the page with
minor side-effects, as a workaround to avoid downtime while other
performance optimizations can be done on the side to allow
the page to load so the workaround is no longer needed.
Closes-bug: #2051003
Related-bug: #2045168
Change-Id: Iedad6ef48cbe0b
(cherry picked from commit 6b93e9dd8713f41
(cherry picked from commit 45a86be78ab883f
(cherry picked from commit 1ec179bb9d3faea
(cherry picked from commit 9e2ae8e65b63185
(cherry picked from commit b2af81e75d68114
(cherry picked from commit 7eb985778c99735
(cherry picked from commit 560adf4ac34cded
(cherry picked from commit 5d4080d351106d7
tags: | added: in-stable-ussuri |
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-openstack-dashboard (stable/2023.2) | #29 |
Reviewed: https:/
Committed: https:/
Submitter: "Zuul (22348)"
Branch: stable/2023.2
commit 2f430d15e64766f
Author: Rodrigo Barbieri <email address hidden>
Date: Wed Mar 6 14:59:11 2024 -0300
Adjust haproxy timeout to intended values
Many years ago change Ida7949113594b9
attempted to change the timeouts of haproxy but did not succeed,
as deployments were still using the values from the charm's
templates/
and causing timeouts (see bug). Additionally, the description
of the config options became inaccurate, stating the default to
be a value that they were really not.
This patch addresses the timeout value discrepancy, adjusting
to the original change's intended values.
Closes-bug: #2045168
Change-Id: I83405727b4a116
(cherry picked from commit 09c587116059319
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-openstack-dashboard (stable/2023.1) | #30 |
Fix proposed to branch: stable/2023.1
Review: https:/
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-openstack-dashboard (stable/2023.1) | #31 |
Reviewed: https:/
Committed: https:/
Submitter: "Zuul (22348)"
Branch: stable/2023.1
commit f013b18c7680174
Author: Rodrigo Barbieri <email address hidden>
Date: Wed Mar 6 14:59:11 2024 -0300
Adjust haproxy timeout to intended values
Many years ago change Ida7949113594b9
attempted to change the timeouts of haproxy but did not succeed,
as deployments were still using the values from the charm's
templates/
and causing timeouts (see bug). Additionally, the description
of the config options became inaccurate, stating the default to
be a value that they were really not.
This patch addresses the timeout value discrepancy, adjusting
to the original change's intended values.
Closes-bug: #2045168
Change-Id: I83405727b4a116
(cherry picked from commit 09c587116059319
(cherry picked from commit 2f430d15e64766f
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-openstack-dashboard (stable/zed) | #32 |
Fix proposed to branch: stable/zed
Review: https:/
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-openstack-dashboard (stable/zed) | #33 |
Reviewed: https:/
Committed: https:/
Submitter: "Zuul (22348)"
Branch: stable/zed
commit f6f60a37ba99fd5
Author: Rodrigo Barbieri <email address hidden>
Date: Wed Mar 6 14:59:11 2024 -0300
Adjust haproxy timeout to intended values
Many years ago change Ida7949113594b9
attempted to change the timeouts of haproxy but did not succeed,
as deployments were still using the values from the charm's
templates/
and causing timeouts (see bug). Additionally, the description
of the config options became inaccurate, stating the default to
be a value that they were really not.
This patch addresses the timeout value discrepancy, adjusting
to the original change's intended values.
Closes-bug: #2045168
Change-Id: I83405727b4a116
(cherry picked from commit 09c587116059319
(cherry picked from commit 2f430d15e64766f
(cherry picked from commit f013b18c7680174
tags: | added: in-stable-zed |
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-openstack-dashboard (stable/yoga) | #34 |
Fix proposed to branch: stable/yoga
Review: https:/
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-openstack-dashboard (stable/yoga) | #35 |
Reviewed: https:/
Committed: https:/
Submitter: "Zuul (22348)"
Branch: stable/yoga
commit 62777b150b1753a
Author: Rodrigo Barbieri <email address hidden>
Date: Wed Mar 6 14:59:11 2024 -0300
Adjust haproxy timeout to intended values
Many years ago change Ida7949113594b9
attempted to change the timeouts of haproxy but did not succeed,
as deployments were still using the values from the charm's
templates/
and causing timeouts (see bug). Additionally, the description
of the config options became inaccurate, stating the default to
be a value that they were really not.
This patch addresses the timeout value discrepancy, adjusting
to the original change's intended values.
Closes-bug: #2045168
Change-Id: I83405727b4a116
(cherry picked from commit 09c587116059319
(cherry picked from commit 2f430d15e64766f
(cherry picked from commit f013b18c7680174
(cherry picked from commit f6f60a37ba99fd5
tags: | added: in-stable-yoga |
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-openstack-dashboard (stable/xena) | #36 |
Fix proposed to branch: stable/xena
Review: https:/
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-openstack-dashboard (stable/xena) | #37 |
Reviewed: https:/
Committed: https:/
Submitter: "Zuul (22348)"
Branch: stable/xena
commit fb63730e63fb099
Author: Rodrigo Barbieri <email address hidden>
Date: Wed Mar 6 14:59:11 2024 -0300
Adjust haproxy timeout to intended values
Many years ago change Ida7949113594b9
attempted to change the timeouts of haproxy but did not succeed,
as deployments were still using the values from the charm's
templates/
and causing timeouts (see bug). Additionally, the description
of the config options became inaccurate, stating the default to
be a value that they were really not.
This patch addresses the timeout value discrepancy, adjusting
to the original change's intended values.
Closes-bug: #2045168
Change-Id: I83405727b4a116
(cherry picked from commit 09c587116059319
(cherry picked from commit 2f430d15e64766f
(cherry picked from commit f013b18c7680174
(cherry picked from commit f6f60a37ba99fd5
(cherry picked from commit 62777b150b1753a
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-openstack-dashboard (stable/wallaby) | #38 |
Fix proposed to branch: stable/wallaby
Review: https:/
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-openstack-dashboard (stable/wallaby) | #39 |
Reviewed: https:/
Committed: https:/
Submitter: "Zuul (22348)"
Branch: stable/wallaby
commit 9885182a9316c3f
Author: Rodrigo Barbieri <email address hidden>
Date: Wed Mar 6 14:59:11 2024 -0300
Adjust haproxy timeout to intended values
Many years ago change Ida7949113594b9
attempted to change the timeouts of haproxy but did not succeed,
as deployments were still using the values from the charm's
templates/
and causing timeouts (see bug). Additionally, the description
of the config options became inaccurate, stating the default to
be a value that they were really not.
This patch addresses the timeout value discrepancy, adjusting
to the original change's intended values.
Closes-bug: #2045168
Change-Id: I83405727b4a116
(cherry picked from commit 09c587116059319
(cherry picked from commit 2f430d15e64766f
(cherry picked from commit f013b18c7680174
(cherry picked from commit f6f60a37ba99fd5
(cherry picked from commit 62777b150b1753a
(cherry picked from commit fb63730e63fb099
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-openstack-dashboard (stable/victoria) | #40 |
Fix proposed to branch: stable/victoria
Review: https:/
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-openstack-dashboard (stable/victoria) | #41 |
Reviewed: https:/
Committed: https:/
Submitter: "Zuul (22348)"
Branch: stable/victoria
commit a7a427ad6a11fe5
Author: Rodrigo Barbieri <email address hidden>
Date: Wed Mar 6 14:59:11 2024 -0300
Adjust haproxy timeout to intended values
Many years ago change Ida7949113594b9
attempted to change the timeouts of haproxy but did not succeed,
as deployments were still using the values from the charm's
templates/
and causing timeouts (see bug). Additionally, the description
of the config options became inaccurate, stating the default to
be a value that they were really not.
This patch addresses the timeout value discrepancy, adjusting
to the original change's intended values.
Closes-bug: #2045168
Change-Id: I83405727b4a116
(cherry picked from commit 09c587116059319
(cherry picked from commit 2f430d15e64766f
(cherry picked from commit f013b18c7680174
(cherry picked from commit f6f60a37ba99fd5
(cherry picked from commit 62777b150b1753a
(cherry picked from commit fb63730e63fb099
(cherry picked from commit 9885182a9316c3f
OpenStack Infra (hudson-openstack) wrote : Fix proposed to charm-openstack-dashboard (stable/ussuri) | #42 |
Fix proposed to branch: stable/ussuri
Review: https:/
OpenStack Infra (hudson-openstack) wrote : Fix merged to charm-openstack-dashboard (stable/ussuri) | #43 |
Reviewed: https:/
Committed: https:/
Submitter: "Zuul (22348)"
Branch: stable/ussuri
commit f268e2c4fdae1c0
Author: Rodrigo Barbieri <email address hidden>
Date: Wed Mar 6 14:59:11 2024 -0300
Adjust haproxy timeout to intended values
Many years ago change Ida7949113594b9
attempted to change the timeouts of haproxy but did not succeed,
as deployments were still using the values from the charm's
templates/
and causing timeouts (see bug). Additionally, the description
of the config options became inaccurate, stating the default to
be a value that they were really not.
This patch addresses the timeout value discrepancy, adjusting
to the original change's intended values.
Closes-bug: #2045168
Change-Id: I83405727b4a116
(cherry picked from commit 09c587116059319
(cherry picked from commit 2f430d15e64766f
(cherry picked from commit f013b18c7680174
(cherry picked from commit f6f60a37ba99fd5
(cherry picked from commit 62777b150b1753a
(cherry picked from commit fb63730e63fb099
(cherry picked from commit 9885182a9316c3f
(cherry picked from commit a7a427ad6a11fe5
I logged the call stack around the sleep and compared the call stack of the first call with the one that loops; they are identical, but on different threads. Apparently wsgi or django spawns a new thread when the previous thread takes more than 30 seconds to finish.
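The comparison described here can be made easier by tagging each captured stack with the thread it came from. A minimal stdlib sketch (the `tag` label is just an illustrative marker, not something horizon provides):

```python
import threading
import traceback

def capture_stack(tag):
    # Format the current call stack and prefix it with the thread identity,
    # so two identical stacks captured from different threads can be
    # distinguished when diffing log output.
    stack = "".join(traceback.format_stack(limit=10))
    return "[%s thread=%s]\n%s" % (tag, threading.current_thread().name, stack)
```

Logging `capture_stack("get_data")` at the sleep site on each invocation would show the identical frames with differing thread names, matching the observation above.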