/etc/hosts file is not updated after node removal

Bug #1543427 reported by Dmitry Belyaninov
This bug affects 1 person
Affects: Fuel for OpenStack
Status: New
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

Scenario:
Deploy a cluster: 3 controllers + 1 compute, Neutron VLAN.
Remove 2 controllers and re-deploy.

After that, the /etc/hosts files on the remaining nodes still contain entries for the removed controllers (node-2 and node-3):

[root@nailgun ~]# fuel nodes
id | status | name | cluster | ip | mac | roles | pending_roles | online | group_id
---|----------|------------------|---------|-------------|-------------------|------------|---------------|--------|---------
6 | discover | Untitled (76:6f) | None | 10.109.0.10 | 64:90:a6:46:76:6f | | | True | None
1 | ready | Untitled (0d:02) | 1 | 10.109.0.3 | 64:d1:b2:df:0d:02 | controller | | True | 1
4 | ready | Untitled (ba:88) | 1 | 10.109.0.7 | 64:21:6c:87:ba:88 | compute | | True | 1
9 | discover | Untitled (76:a5) | None | 10.109.0.9 | 64:2a:59:ad:76:a5 | | | True | None
11 | discover | Untitled (f6:c7) | None | 10.109.0.6 | 64:69:cd:97:f6:c7 | | | True | None
5 | discover | Untitled (de:b8) | None | 10.109.0.4 | 64:0d:d3:ea:de:b8 | | | True | None
7 | discover | Untitled (61:3e) | None | 10.109.0.11 | 64:bd:7c:34:61:3e | | | True | None
8 | discover | Untitled (87:d7) | None | 10.109.0.8 | 64:9b:bc:ae:87:d7 | | | True | None
10 | discover | Untitled (55:ce) | None | 10.109.0.5 | 64:e6:3a:d7:55:ce | | | True | None
[root@nailgun ~]# ssh node-4
Warning: Permanently added 'node-4' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.13.0-77-generic x86_64)

 * Documentation: https://help.ubuntu.com/
Last login: Tue Feb 9 05:57:08 2016 from 10.109.0.2
root@node-4:~#
root@node-4:~# cat /etc/hosts
# HEADER: This file was autogenerated at 2016-02-08 16:21:25 +0000
# HEADER: by puppet. While it can still be managed manually, it
# HEADER: is definitely not recommended.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
10.109.1.5 node-4.test.domain.local node-4
10.109.1.7 node-2.test.domain.local node-2
10.109.1.6 node-3.test.domain.local node-3
10.109.1.6 messaging-node-3.test.domain.local messaging-node-3
10.109.1.4 messaging-node-1.test.domain.local messaging-node-1
10.109.1.7 messaging-node-2.test.domain.local messaging-node-2
10.109.1.4 node-1.test.domain.local node-1
10.109.1.5 messaging-node-4.test.domain.local messaging-node-4
root@node-4:~# exit
logout
Connection to node-4 closed.
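
To spot the stale entries quickly, the /etc/hosts contents can be compared against the current "fuel nodes" output. A rough check, run from the Fuel master; the node-<id>.test.domain.local naming and the "ready" status filter are assumptions taken from the output above:

# Node FQDNs found in /etc/hosts on node-4
ssh node-4 "awk '\$2 ~ /^node-/ {print \$2}' /etc/hosts" | sort -u > /tmp/hosts_entries
# FQDNs of nodes currently deployed in the cluster
fuel nodes 2>/dev/null \
  | awk -F'|' '$2 ~ /ready/ {gsub(/ /, "", $1); print "node-" $1 ".test.domain.local"}' \
  | sort -u > /tmp/deployed_nodes
# Entries present only in /etc/hosts are stale (here: node-2 and node-3)
comm -23 /tmp/hosts_entries /tmp/deployed_nodes
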
[root@nailgun ~]# ssh node-1
Warning: Permanently added 'node-1' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.13.0-77-generic x86_64)

 * Documentation: https://help.ubuntu.com/
root@node-1:~# cat /etc/hosts
# HEADER: This file was autogenerated at 2016-02-08 15:21:52 +0000
# HEADER: by puppet. While it can still be managed manually, it
# HEADER: is definitely not recommended.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
10.109.1.5 node-4.test.domain.local node-4
10.109.1.7 node-2.test.domain.local node-2
10.109.1.6 node-3.test.domain.local node-3
10.109.1.6 messaging-node-3.test.domain.local messaging-node-3
10.109.1.4 messaging-node-1.test.domain.local messaging-node-1
10.109.1.7 messaging-node-2.test.domain.local messaging-node-2
10.109.1.4 node-1.test.domain.local node-1
10.109.1.5 messaging-node-4.test.domain.local messaging-node-4
root@node-1:~#
root@node-1:~# cat /etc/corosync/corosync.conf
compatibility: whitetank

quorum {
  provider: corosync_votequorum
  two_node: 0
}

nodelist {
  node {
    # node-1.test.domain.local
    ring0_addr: 10.109.1.4
    nodeid: 1
  }
}
..........
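
The HEADER lines show that /etc/hosts is generated by puppet, so the stale lines suggest that host entries are added for new nodes but never purged for removed ones. As a minimal sketch of one possible direction (not the actual fuel-library behavior), puppet can purge host entries that are absent from the catalog; note that the localhost and IPv6 lines would then also need to be declared as managed host resources, or they would be removed as unmanaged too:

# Hypothetical dry run on a node (--noop makes no changes): with no host
# resources declared in this one-liner, puppet reports every /etc/hosts
# entry it would purge; in a real manifest each current node (and localhost)
# would be declared as a managed host resource so only stale entries go away
puppet apply --noop -e 'resources { "host": purge => true }'
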

VERSION:
  feature_groups:
    - mirantis
  production: "docker"
  release: "8.0"
  api: "1.0"
  build_number: "529"
  build_id: "529"
  fuel-nailgun_sha: "baec8643ca624e52b37873f2dbd511c135d236d9"
  python-fuelclient_sha: "4f234669cfe88a9406f4e438b1e1f74f1ef484a5"
  fuel-agent_sha: "658be72c4b42d3e1436b86ac4567ab914bfb451b"
  fuel-nailgun-agent_sha: "b2bb466fd5bd92da614cdbd819d6999c510ebfb1"
  astute_sha: "b81577a5b7857c4be8748492bae1dec2fa89b446"
  fuel-library_sha: "e2d79330d5d708796330fac67722c21f85569b87"
  fuel-ostf_sha: "3bc76a63a9e7d195ff34eadc29552f4235fa6c52"
  fuel-mirror_sha: "fb45b80d7bee5899d931f926e5c9512e2b442749"
  fuelmenu_sha: "e071216cb214e34b4d861478033425ee6a54a3be"
  shotgun_sha: "63645dea384a37dde5c01d4f8905566978e5d906"
  network-checker_sha: "a43cf96cd9532f10794dce736350bf5bed350e9d"
  fuel-upgrade_sha: "616a7490ec7199f69759e97e42f9b97dfc87e85b"
  fuelmain_sha: "a365f05b903368225da3fea9aa42afc1d50dc9b4"

Revision history for this message
Dmitry Belyaninov (dbelyaninov) wrote:

It is strange, but I can't generate a diagnostic snapshot. Please contact me ASAP, while the cluster is still alive.
