[systest] Failover tests for RabbitMQ cluster should destroy nodes instead of suspend (pause)
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Fuel for OpenStack | Fix Released | High | Fuel QA Team |
7.0.x | Won't Fix | High | Fuel QA Team |
Bug Description
In system tests we check HA for the RabbitMQ cluster by suspending (pausing) controllers and verifying that the cluster passes health checks, for example:
Then we resume the suspended node and run the health checks once again:
This scenario is not applicable to real hardware environments and can cause additional issues for the RabbitMQ recovery script, so recovery takes more time than usual:
```
Traceback (most recent call last):
  File "/usr/lib/
    testMethod()
  File "/usr/lib/
    self.
  File "/home/
    compatabili
  File "/home/
    func()
  File "/home/
    func(
  File "/home/
    result = func(*args, **kwargs)
  File "/home/
    super(
  File "/home/
    self.
  File "/home/
    result = func(*args, **kwargs)
  File "/home/
    interval=20, timeout=timeout)
  File "/home/
    return raising_predicate()
  File "/home/
    should_
  File "/home/
    result = func(*args, **kwargs)
  File "/home/
    failed_
  File "/home/
    result = func(*args, **kwargs)
  File "/home/
    indent=1)))
  File "/home/
    raise ASSERTION_
AssertionError: Failed 2 OSTF tests; should fail 0 tests. Names of failed tests: [
  {
    "RabbitMQ availability (failure)": "Time limit exceeded while waiting for to finish. Please refer to OpenStack logs for more details."
  },
  {
    "RabbitMQ replication (failure)": "Failed to establish AMQP connection to 5673/tcp port on 10.109.7.4 from controller node! Please refer to OpenStack logs for more details."
  }]
```
We need to replace the suspend/resume actions with destroy/start in the RabbitMQ failover system tests.
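The distinction matters because a suspended (paused) VM keeps its memory and half-open TCP connections, so cluster peers see hung connections rather than the clean disappearance that a real hardware failure produces. As a rough illustration of the two strategies (a toy model with hypothetical `Node`, `suspend`, `destroy` names, not the actual fuel-devops API the tests use):

```python
# Toy model of the two failover strategies (illustrative only; the real
# system tests drive fuel-devops/libvirt nodes, whose API differs).

class Node:
    """Minimal stand-in for a controller VM."""

    def __init__(self, name):
        self.name = name
        self.state = "running"

    def suspend(self):
        # Pause the VM: RAM and TCP state are preserved, so RabbitMQ
        # peers see connections hang instead of dropping cleanly.
        self.state = "paused"

    def resume(self):
        self.state = "running"

    def destroy(self):
        # Hard power-off: the node vanishes from the network, which is
        # what a genuine hardware failure looks like to the cluster.
        self.state = "off"

    def start(self):
        self.state = "running"


def failover_cycle(node, hard=True):
    """Take a node down and bring it back; return the observed states."""
    if hard:
        node.destroy()
        down_state = node.state
        node.start()
    else:
        node.suspend()
        down_state = node.state
        node.resume()
    return [down_state, node.state]


print(failover_cycle(Node("ctrl-1"), hard=True))   # ['off', 'running']
print(failover_cycle(Node("ctrl-1"), hard=False))  # ['paused', 'running']
```

The bug's proposal is effectively to switch the tests from the `hard=False` path to the `hard=True` path, so the failover exercised in CI matches what operators would see on real hardware.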
no longer affects: fuel/8.0.x
tags: added: area-qa
Changed in fuel:
status: Fix Committed → Fix Released
tags: added: non-release
Fix proposed to branch: master
Review: https://review.openstack.org/241961