[2.2] VM creation with pod accepts the same hostname and pushes out the original VM
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
MAAS | Fix Released | High | Alberto Donato |
Bug Description
This may be a critical issue.
How to reproduce:
1. Set up a MAAS pod with the virsh driver.
2. Check that the VM list is empty:
$ virsh list --all
Id Name State
-------
<empty>
3. Define a VM with hostname=foo:
$ maas admin pod compose 117 cores=8 memory=16384 storage=default:128 hostname=foo
Success.
Machine-readable output follows:
{
"system_id": "sqhxte",
"resource_uri": "/MAAS/
}
4. Check that the VM "foo" was created:
$ virsh list --all
Id Name State
-------
4 foo running
5. Define another VM with the same hostname=foo:
$ maas admin pod compose 117 cores=8 memory=16384 storage=default:128 hostname=foo
Success.
Machine-readable output follows:
{
"system_id": "dgasff",
"resource_uri": "/MAAS/
}
6. Expect two VMs in the virsh list:
$ virsh list --all
Id Name State
-------
5 foo running
However, only one VM is present. Note that the virsh "Id" has changed, which means the original VM was shut down and replaced.
In MAAS, two machines are listed: foo.maas and rapid-swine.maas (a randomly generated name). Only the randomly named one commissioned successfully; foo.maas failed commissioning with a timeout.
If this happens to a VM that is already in use, we may lose the original VM.
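The likely mechanism (an assumption, not confirmed in this report) is that libvirt domain names must be unique, so defining a second domain named "foo" replaces the first definition. A minimal sketch of the kind of guard the fix presumably adds, with hypothetical names:

```python
# Hypothetical sketch of the fix: reject a compose request whose hostname
# already exists on the pod, instead of letting libvirt silently redefine
# (and so replace) the existing domain.
def compose_vm(hostname, existing_names):
    if hostname in existing_names:
        raise ValueError("a VM named %r already exists on this pod" % hostname)
    existing_names.add(hostname)
    return {"hostname": hostname}

pod_vms = {"foo"}
print(compose_vm("bar", pod_vms))   # a new name composes fine
try:
    compose_vm("foo", pod_vms)      # the duplicate is rejected up front
except ValueError as exc:
    print(exc)
```

With this check, the second `maas admin pod compose ... hostname=foo` request above would fail with an error rather than pushing out the running VM.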
Related branches
- MAAS Lander: Approve
- Blake Rouse (community): Approve
Diff: 118 lines (+43/-7), 4 files modified:
- src/maasserver/forms/pods.py (+9/-1)
- src/maasserver/forms/tests/test_pods.py (+13/-0)
- src/maasserver/models/bmc.py (+5/-6)
- src/maasserver/models/tests/test_bmc.py (+16/-0)
tags: added: cpe-onsite
tags: added: internal
Changed in maas:
- milestone: none → 2.3.0beta3
- importance: Undecided → High
- status: New → Triaged
- assignee: nobody → Andres Rodriguez (andreserl)
Changed in maas:
- milestone: 2.3.0beta3 → 2.3.0beta4
tags: added: pod
Changed in maas:
- assignee: Andres Rodriguez (andreserl) → Alberto Donato (ack)
Changed in maas:
- milestone: 2.3.0beta4 → 2.3.0rc1
Changed in maas:
- status: Triaged → In Progress
Changed in maas:
- status: In Progress → Fix Committed
Changed in maas:
- status: Fix Committed → Fix Released
Looks like the disk images are still there:
libvirt/images/3a470755-a55a-4ba0-aff5-3ed279b02ac0
libvirt/images/3cfe4037-74ec-49b7-81ea-09f119ccac16
libvirt/images/96329575-72e5-44c6-89b4-194d014db32f
-rw------- 1 root root 120G Aug 11 00:11 /var/lib/
-rw------- 1 root root 120G Sep 11 13:31 /var/lib/
-rw------- 1 root root 120G Sep 11 13:31 /var/lib/
(Does MAAS not clean up disks when a VM is deleted from the pod?)
At minimum, this behavior shuts down the original VM and leaves it untraceable from MAAS.
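As a stopgap for the leftover images, orphaned volumes can be found by comparing the disk paths referenced by defined domains against the files present in the storage pool. A minimal pure-Python sketch with hypothetical paths (a real version would query libvirt for both lists):

```python
# Hypothetical helper: report images in the storage pool that no defined
# domain references any more, i.e. candidates left behind after a domain
# was replaced or deleted.
def find_orphaned_volumes(referenced_paths, pool_paths):
    referenced = set(referenced_paths)
    return sorted(p for p in pool_paths if p not in referenced)

orphans = find_orphaned_volumes(
    ["/var/lib/libvirt/images/current.img"],   # still referenced by a domain
    ["/var/lib/libvirt/images/current.img",
     "/var/lib/libvirt/images/old-a.img",      # left behind
     "/var/lib/libvirt/images/old-b.img"],     # left behind
)
print(orphans)
```

Any path reported as orphaned could then be reviewed and removed manually, until MAAS itself cleans up pool volumes on delete.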