[Reduced footprint] After node with virt,compute is rebooted, VM with controller receives new MAC
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Mirantis OpenStack | Fix Released | Critical | Bartosz Kupidura |
7.0.x | Fix Released | Critical | Bartosz Kupidura |
Bug Description
VERSION:
  feature_groups:
    - mirantis
    - advanced
  production: "docker"
  release: "7.0"
  openstack_
  api: "1.0"
  build_number: "260"
  build_id: "260"
  nailgun_sha: "3de0f32fe9e09f
  python-
  fuel-agent_sha: "082a47bf014002
  fuel-
  astute_sha: "53c86cba593ddb
  fuel-library_sha: "e055af9dee6fba
  fuel-ostf_sha: "582a81ccaa1e43
  fuelmain_sha: "994bb9a8a2a3c4
Scenario:
1. Deploy Tun env
2. Add 3 compute,virt nodes
3. Add one VM config on each compute,virt node
4. Launch VMs
5. Add 3 controllers on KVM
6. Deploy cluster
7. Force reboot a virt,compute node (one way to do this is sketched after this list)
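The report does not say how the reboot was forced; a minimal sketch of one common way to hard-reset the host (an assumption, not taken from the report) is the sysrq trigger, which reboots without giving the guest a clean shutdown:

# Run on the compute,virt host (e.g. node-3). Hypothetical commands, not from the report:
echo 1 > /proc/sys/kernel/sysrq    # enable sysrq if it is disabled
echo b > /proc/sysrq-trigger       # immediate reboot, no sync/unmount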
Actual result:
1. The VM is running:
root@node-3:~# virsh list --all
 Id    Name    State
----------------------------------
 2     1_vm    running
2. Fuel doesn't see it (node-13 shows online = False):
id | status | name | cluster | ip | mac | roles | pending_roles | online | group_id
---|---|---|---|---|---|---|---|---|---
13 | ready | Untitled (0d:c3) | 2 | 172.16.40.83 | 52:54:00:63:0d:c3 | controller | | False | 2
14 | ready | Untitled (43:92) | 2 | 172.16.40.82 | 52:54:00:84:43:92 | controller | | True | 2
3 | ready | Untitled (8d:a0) | 2 | 172.16.40.73 | 0c:c4:7a:17:8d:a0 | compute, virt | | True | 2
5 | ready | Untitled (00:e0) | 2 | 172.16.40.76 | 0c:c4:7a:15:00:e0 | compute, virt | | True | 2
4 | ready | Untitled (01:3c) | 2 | 172.16.40.75 | 0c:c4:7a:15:01:3c | compute, virt | | True | 2
12 | ready | Untitled (2d:ef) | 2 | 172.16.40.81 | 52:54:00:dc:2d:ef | controller | | True | 2
3. Services on node-13 are down:
root@node-14:~# nova-manage service list
Binary Host Zone Status State Updated_At
nova-consoleauth node-12.domain.tld internal enabled :-) 2015-09-02 12:26:45
nova-scheduler node-12.domain.tld internal enabled :-) 2015-09-02 12:26:45
nova-conductor node-12.domain.tld internal enabled :-) 2015-09-02 12:26:17
nova-cert node-12.domain.tld internal enabled :-) 2015-09-02 12:26:15
nova-consoleauth node-14.domain.tld internal enabled :-) 2015-09-02 12:26:14
nova-scheduler node-14.domain.tld internal enabled :-) 2015-09-02 12:26:14
nova-conductor node-14.domain.tld internal enabled :-) 2015-09-02 12:26:18
nova-cert node-14.domain.tld internal enabled :-) 2015-09-02 12:26:25
nova-consoleauth node-13.domain.tld internal enabled XXX 2015-09-02 06:28:25
nova-scheduler node-13.domain.tld internal enabled XXX 2015-09-02 06:28:26
nova-conductor node-13.domain.tld internal enabled XXX 2015-09-02 06:28:26
nova-cert node-13.domain.tld internal enabled XXX 2015-09-02 06:28:37
nova-compute node-5.domain.tld nova enabled :-) 2015-09-02 12:26:25
nova-compute node-4.domain.tld nova enabled :-) 2015-09-02 12:26:42
nova-compute node-3.domain.tld nova enabled :-) 2015-09-02 12:26:20
4. node-13 is stopped in Corosync:
root@node-14:~# crm status
Last updated: Wed Sep 2 12:27:09 2015
Last change: Tue Sep 1 22:27:04 2015
Stack: corosync
Current DC: node-12.domain.tld (12) - partition with quorum
Version: 1.1.12-561c4cf
3 Nodes configured
43 Resources configured
Online: [ node-12.domain.tld node-14.domain.tld ]
OFFLINE: [ node-13.domain.tld ]
Clone Set: clone_p_vrouter [p_vrouter]
Started: [ node-12.domain.tld node-14.domain.tld ]
vip__management (ocf::fuel:
vip__vrouter_pub (ocf::fuel:
vip__vrouter (ocf::fuel:
vip__public (ocf::fuel:
Master/Slave Set: master_p_conntrackd [p_conntrackd]
Masters: [ node-12.domain.tld ]
Slaves: [ node-14.domain.tld ]
Clone Set: clone_p_haproxy [p_haproxy]
Started: [ node-12.domain.tld node-14.domain.tld ]
Clone Set: clone_p_mysql [p_mysql]
Started: [ node-12.domain.tld node-14.domain.tld ]
Master/Slave Set: master_
Masters: [ node-12.domain.tld ]
Slaves: [ node-14.domain.tld ]
Clone Set: clone_p_dns [p_dns]
Started: [ node-12.domain.tld node-14.domain.tld ]
Clone Set: clone_p_heat-engine [p_heat-engine]
Started: [ node-12.domain.tld node-14.domain.tld ]
Clone Set: clone_p_
Started: [ node-12.domain.tld node-14.domain.tld ]
Clone Set: clone_p_
Started: [ node-12.domain.tld node-14.domain.tld ]
Clone Set: clone_p_
Started: [ node-12.domain.tld node-14.domain.tld ]
Clone Set: clone_p_
Started: [ node-12.domain.tld node-14.domain.tld ]
Clone Set: clone_p_ntp [p_ntp]
Started: [ node-12.domain.tld node-14.domain.tld ]
Clone Set: clone_ping_
Started: [ node-12.domain.tld node-14.domain.tld ]
The MACs on the VMs' interfaces are regenerated after the compute host is rebooted, so the node loses its connection.
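A quick way to observe this (the commands are assumed for illustration, not taken from the report; 1_vm is the domain from the virsh output above) is to compare the domain's MAC before and after the host reboot; libvirt generates a fresh 52:54:00:* address for any <interface> that carries no explicit <mac address=.../>:

# Hypothetical check on the compute,virt host:
virsh domiflist 1_vm                      # prints interface, type, source, model, MAC
virsh dumpxml 1_vm | grep 'mac address'   # the address Fuel registered, or a new one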
description: updated
Changed in mos:
status: Confirmed → In Progress
Changed in mos:
status: In Progress → Fix Committed
Hi, I just tested it on the lab. The problem occurs because we have no MAC specified for the virtual machine. So, to keep a static MAC across the reboot we should define the virtual machine using our template, then run `virsh dumpxml ${vm_name} > /etc/libvirt/autostart/${vm_name}` so the node's XML is saved with the generated MACs.
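Equivalently, the template can pin the MAC up front. A minimal sketch of the relevant <interface> stanza (the bridge name is hypothetical, and the address is only an example reusing the one Fuel registered for node-13; neither is taken from the report):

<!-- Fragment of the domain XML. With an explicit <mac>, libvirt reuses
     this address on every boot instead of generating a new one. -->
<interface type='bridge'>
  <mac address='52:54:00:63:0d:c3'/>  <!-- pin the MAC Fuel registered -->
  <source bridge='br100'/>            <!-- bridge name is an assumption -->
  <model type='virtio'/>
</interface>

After editing, `virsh define` the XML (or redump it as above) so the autostart copy carries the pinned address.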