connections to HA-enabled services are not closed properly

Bug #1756087 reported by Junien F
Affects: OpenStack Nova Cloud Controller Charm
Status: Triaged
Importance: Low
Assigned to: Unassigned

Bug Description

Hi,

Running a xenial/mitaka cloud with 17.11 charms, HA-enabled (with 3 units).

I hooked the haproxy logs up to a centralized logging system and thought it would be nice to have statistics on haproxy termination states. I realized with horror that more than 50% of the haproxy connections ended in a "cD" state, which means that the timeout was reached (if you have the same client and server timeout, haproxy will always report a "cD" state when that timeout is reached, AFAIK).
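For reference, termination-state statistics like the ones above can be tallied with a short script. This is a hypothetical sketch (not what was actually run), assuming haproxy's default HTTP log format, where the 4-character termination-state field (e.g. "cD--") sits between the captured cookies and the actconn/feconn/beconn/srv_conn/retries counters:

```python
import re
from collections import Counter

# The 4-character termination-state flags, followed by the five
# slash-separated connection counters, in haproxy's HTTP log format.
STATE_RE = re.compile(r" ([a-zA-Z-]{4}) \d+/\d+/\d+/\d+/\d+ ")

def termination_states(lines):
    """Tally the first two termination-state characters per log line."""
    counts = Counter()
    for line in lines:
        m = STATE_RE.search(line)
        if m:
            counts[m.group(1)[:2]] += 1  # e.g. "cD", "--", "sD"
    return counts

# Two made-up sample lines in haproxy's default HTTP log format.
logs = [
    '10.5.0.4:51234 [23/Mar/2018:10:00:00.123] front back/unit-0 '
    '0/0/1/2/90003 200 312 - - cD-- 10/10/5/5/0 0/0 "GET /v3 HTTP/1.1"',
    '10.5.0.7:51240 [23/Mar/2018:10:00:01.456] front back/unit-1 '
    '0/0/1/2/45 200 312 - - ---- 9/9/4/4/0 0/0 "GET /v3 HTTP/1.1"',
]
print(termination_states(logs))  # Counter({'cD': 1, '--': 1})
```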

I investigated further and found that this is actually benign: the various API calls (from one OpenStack service to another, say from neutron-api to keystone) are made with "Connection: keep-alive", but the connection is not closed properly by the client, so it stays up for a while until the server closes it. See the attached wireshark screenshot (notice the last packets, sent 1 min and 2 min after the last data byte).
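To illustrate the mechanism (a minimal assumed sketch, not the actual charm or service code): an HTTP/1.1 client that never calls close() on its keep-alive connection leaves the socket open until a server-side timeout fires, which is the "cD" pattern above, whereas an explicit close() ends the session with a FIN right away:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 => keep-alive by default

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence request logging
        pass

# Stand-in for the API endpoint behind haproxy.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
conn.getresponse().read()
assert conn.sock is not None  # keep-alive: socket still open

# Without this, the idle socket lingers until the server (or haproxy)
# times it out; closing explicitly releases it immediately.
conn.close()
assert conn.sock is None

server.shutdown()
```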

This leaves sockets open needlessly, and it also makes the haproxy logs useless, preventing their use in any kind of dashboard.

Thanks

Revision history for this message
Junien F (axino) wrote :

Note that this is still happening on xenial/queens with 18.02 charms.

James Page (james-page)
Changed in charm-nova-cloud-controller:
status: New → Triaged
importance: Undecided → Low