Wrong haproxy-status on l3 agent node after restart
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Fuel for OpenStack | Fix Released | High | Ivan Suzdal | 9.1
Bug Description
Detailed bug description:
During a swarm test run, one of the tests failed an OSTF check on haproxy status.
After resetting the controller that hosts the l3-agent, the output of haproxy-status.sh on the restarted node shows wrong status for the backends on the nodes that were not restarted.
It seems that after the reset, the controller with the l3-agent cannot get the status of the other controllers and never updates it later.
restarted node:
http://
other controllers:
http://
Steps to reproduce:
1. Revert snapshot with neutron cluster
2. Create an instance with a key pair
3. Manually reschedule the router from the primary controller to another one
4. Reset the controller with the l3-agent
5. Check that the l3-agent was rescheduled
6. Check network connectivity from the instance via the dhcp namespace
7. Run OSTF
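Step 6 above is normally done from a controller by entering the DHCP namespace of the tenant network. A minimal dry-run sketch of that check follows; the namespace name and instance IP are hypothetical placeholders (find the real ones with `ip netns list` and the instance's fixed IP):

```shell
#!/bin/sh
# Sketch of the step-6 connectivity check. Names below are made up for
# illustration; substitute the real qdhcp namespace and instance IP.
NETNS="qdhcp-3f8a0b1c"      # hypothetical DHCP namespace of the tenant net
INSTANCE_IP="192.168.111.5" # hypothetical fixed IP of the instance
CMD="ip netns exec $NETNS ping -c 3 $INSTANCE_IP"
# Dry run: print the command. On a controller you would execute it instead.
echo "$CMD"
```

Running `ip netns exec` requires root on the controller; the dry run only prints the command that would be executed.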
Expected results:
All steps pass.
Actual result:
OSTF check failed with message:
http://
Description of the environment:
Snapshot #76
logs:
https:/
description: updated
Changed in fuel:
milestone: none → 9.1
assignee: nobody → Fuel Sustaining (fuel-sustaining-team)
tags: added: area-library
Changed in fuel:
status: New → Confirmed
importance: Undecided → High
tags: added: 9.1-proposed
Changed in fuel:
assignee: Fuel Sustaining (fuel-sustaining-team) → Kyrylo Galanov (kgalanov)
Changed in fuel:
assignee: MOS Linux (mos-linux) → Ivan Suzdal (isuzdal)
tags: removed: area-library
Changed in fuel:
status: In Progress → Fix Committed
Unfortunately it is a haproxy bug. Status is UP via HTTP and DOWN via the unix socket:

echo "show info;show stat;show table" | socat /var/lib/haproxy/stats stdio
--> http://paste.openstack.org/show/550407/

curl "http://10.109.11.9:10000/;csv"
--> http://paste.openstack.org/show/550409/
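To spot the mismatch described above, it helps to pull the per-backend status out of the CSV stats that both the HTTP endpoint and the unix socket return. A small sketch, using sample CSV rows in place of real output (the sample data, proxy names, and addresses are made up; in haproxy's CSV format, field 18 is "status"):

```shell
#!/bin/sh
# Sketch: extract "proxy/server: status" from haproxy CSV stats.
# Real input would come from either source the comment above shows:
#   curl -s "http://10.109.11.9:10000/;csv"
#   echo "show stat" | socat /var/lib/haproxy/stats stdio
# The two rows below are a hypothetical stand-in for that output.
sample='horizon,node-1,0,0,0,1,0,10,0,0,0,0,0,0,0,0,0,UP,1,1,0,0,0,100,0,,1,1,1,,10
horizon,node-2,0,0,0,1,0,10,0,0,0,0,0,0,0,0,0,DOWN,1,1,0,1,1,5,5,,1,1,2,,10'
# Field 1 = pxname, field 2 = svname, field 18 = status.
printf '%s\n' "$sample" | awk -F, '{ print $1 "/" $2 ": " $18 }'
```

Running the same extraction against both the HTTP CSV and the socket output makes the UP-vs-DOWN disagreement for the same backend immediately visible.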