[SWARM][8.0] "Remove controllers" test detects outdated information in /etc/hosts file

Bug #1610157 reported by Vladimir Jigulin
Affects: Fuel for OpenStack
Status: Won't Fix
Importance: Medium
Assigned to: Alexey Stupnikov

Bug Description

Reproduced on CI: https://patching-ci.infra.mirantis.net/job/8.0.acceptance.ubuntu.ha_scale_group_2/8/testReport/(root)/remove_controllers/remove_controllers/

Steps to reproduce:
1. Create cluster
2. Add 3 controllers, 1 compute
3. Deploy the cluster
4. Remove 2 controllers
5. Deploy changes
6. Check that the removed nodes are no longer present in /etc/hosts

Expected results:
Entries for the removed controllers are deleted from the /etc/hosts file on the remaining controllers.

Actual results:
The /etc/hosts file on a remaining controller still contains entries for the removed controllers.

Place in test:
https://github.com/openstack/fuel-qa/blob/stable/8.0/fuelweb_test/tests/tests_scale/test_scale_group_2.py#L174

Error message:
host node-2.test.domain.local is present in /etc/hosts

Reproducibility: 100%
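The failing check can be sketched as a small standalone parser. This is not the actual fuel-qa test (which runs the check on each remaining node over SSH); the helper names and the sample /etc/hosts contents below are hypothetical, with `node-2.test.domain.local` taken from the error message above.

```python
def hosts_entries(hosts_text):
    """Parse /etc/hosts text into (ip, [names]) pairs, skipping comments and blanks."""
    entries = []
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()
        if not line:
            continue
        parts = line.split()
        entries.append((parts[0], parts[1:]))
    return entries

def assert_node_absent(hosts_text, fqdn):
    """Fail if any /etc/hosts entry still names the removed node."""
    stale = [ip for ip, names in hosts_entries(hosts_text) if fqdn in names]
    assert not stale, "host %s is present in /etc/hosts" % fqdn

# Hypothetical /etc/hosts contents on a remaining controller after the removal:
sample = """
127.0.0.1   localhost
10.109.0.3  node-1.test.domain.local node-1
10.109.0.4  node-2.test.domain.local node-2
"""
```

Running `assert_node_absent(sample, "node-2.test.domain.local")` raises an AssertionError with the same wording as the test failure, because the entry for the deleted controller was never cleaned up.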

Changed in fuel:
milestone: none → 8.0-updates
assignee: nobody → MOS Maintenance (mos-maintenance)
importance: Undecided → High
description: updated
Changed in fuel:
assignee: MOS Maintenance (mos-maintenance) → Alexey Stupnikov (astupnikov)
Revision history for this message
Alexey Stupnikov (astupnikov) wrote :

I have checked another scenario: deployed one controller and then added two computes. The resulting /etc/hosts contents were correct on all nodes, so this bug appears to be specific to the node deletion process.

Revision history for this message
Alexey Stupnikov (astupnikov) wrote :

Confirmed for MOS 8. Lowering this bug's severity to Medium, since in my opinion it doesn't break anything.

Changed in fuel:
status: New → Confirmed
importance: High → Medium
Revision history for this message
Alexey Stupnikov (astupnikov) wrote :

This bug was fixed during the 9.0 release cycle (I can't reproduce it on a clean MOS 9.0 environment). We discussed this issue at the mos-maintenance team sync-up and decided that I will try to find an easy fix. If it takes too much time to fix, the bug should be closed as Won't Fix.

Revision history for this message
Alexey Stupnikov (astupnikov) wrote :

Steps to reproduce:
1. Deploy env with 3 controllers and 2 computes.
2. Check /etc/hosts contents on any controller. There should be 2 records for every slave node.
3. Delete 2 controllers, deploy changes.
4. Check /etc/hosts contents.

Expected result: no records for deleted hosts.

Actual result: there are records for deleted hosts.
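The fix hinted at above would amount to regenerating /etc/hosts without the deleted nodes. A minimal sketch, assuming the deployment knows which FQDNs were removed (the function name and the `deleted_fqdns` parameter are illustrative, not Fuel's actual API):

```python
def drop_stale_hosts(hosts_text, deleted_fqdns):
    """Return /etc/hosts text with entries for deleted nodes removed."""
    deleted = set(deleted_fqdns)
    kept = []
    for line in hosts_text.splitlines():
        # Strip any trailing comment before inspecting the hostname fields.
        fields = line.split("#", 1)[0].split()
        # Drop only real entries (ip + names) that name a deleted node;
        # comments and blank lines pass through untouched.
        if len(fields) > 1 and set(fields[1:]) & deleted:
            continue
        kept.append(line)
    return "\n".join(kept) + "\n"

hosts = (
    "127.0.0.1   localhost\n"
    "10.109.0.3  node-1.test.domain.local node-1\n"
    "10.109.0.4  node-2.test.domain.local node-2\n"
)
cleaned = drop_stale_hosts(hosts, ["node-2.test.domain.local"])
```

After the call, `cleaned` keeps the localhost and node-1 lines but no longer mentions node-2, which is the expected result of step 4.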

Revision history for this message
Alexey Stupnikov (astupnikov) wrote :

OK, this bug is a duplicate of bug #1513401, which is marked as Won't Fix because the fix depends on backporting a feature. Since this swarm issue is non-critical, we can't backport a feature to solve it. Closing as Won't Fix and marking as a duplicate of bug #1513401.

Changed in fuel:
status: Confirmed → Won't Fix