[2.2] VM creation with pod accepts the same hostname and pushes out the original VM

Bug #1716328 reported by Nobuto Murata
Affects: MAAS
Status: Fix Released
Importance: High
Assigned to: Alberto Donato

Bug Description

This may be a critical issue.

How to reproduce:

1. set up a MAAS pod with the virsh driver.

2. check VM list is empty

$ virsh list --all
 Id Name State
----------------------------------------------------
<empty>

3. define VM with hostname=foo

$ maas admin pod compose 117 cores=8 memory=16384 storage=default:128 hostname=foo
Success.
Machine-readable output follows:
{
    "system_id": "sqhxte",
    "resource_uri": "/MAAS/api/2.0/machines/sqhxte/"
}

4. check the VM "foo" is created.

$ virsh list --all
 Id Name State
----------------------------------------------------
 4 foo running

5. define another VM with the same hostname=foo

$ maas admin pod compose 117 cores=8 memory=16384 storage=default:128 hostname=foo
Success.
Machine-readable output follows:
{
    "system_id": "dgasff",
    "resource_uri": "/MAAS/api/2.0/machines/dgasff/"
}

6. expect 2 VMs in virsh list

$ virsh list --all
 Id Name State
----------------------------------------------------
 5 foo running

However, only one VM is there. Note that the "Id" in virsh has changed, which means the original VM was shut down and pushed out.
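The observed behavior is consistent with the domain name being reused: defining a libvirt domain under a name that already exists replaces the previous definition, so the second compose clobbers the first VM instead of creating a second one. A minimal sketch of that suspected mechanism (illustration only; `DomainTable` is a hypothetical stand-in for a libvirt host, not MAAS or libvirt code):

```python
# Illustration only: models how defining a domain under an existing name
# replaces the old definition rather than adding a second domain.

class DomainTable:
    def __init__(self):
        self._domains = {}   # name -> virsh-style Id
        self._next_id = 1

    def define_and_start(self, name):
        # Reusing an existing name overwrites the old entry; restarting
        # assigns a fresh Id, so the old VM is shut down and pushed out.
        dom_id = self._next_id
        self._next_id += 1
        self._domains[name] = dom_id
        return dom_id

    def list_all(self):
        return dict(self._domains)

host = DomainTable()
host.define_and_start("foo")   # first compose: Id 1
host.define_and_start("foo")   # second compose reuses the name: Id 2
print(host.list_all())         # only one domain remains, with the new Id
```

This matches the transcript above: one domain named `foo`, but with a bumped Id after the second compose.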

In MAAS, there are two VMs defined, as foo.maas and rapid-swine.maas (randomly generated). Only the randomly named one commissioned successfully; foo.maas failed commissioning with a timeout.

If this happens to an already used VM, we may lose the original VM.


Revision history for this message
Nobuto Murata (nobuto) wrote :

Looks like disk images are still there:
-rw------- 1 root root 120G Aug 11 00:11 /var/lib/libvirt/images/3a470755-a55a-4ba0-aff5-3ed279b02ac0
-rw------- 1 root root 120G Sep 11 13:31 /var/lib/libvirt/images/3cfe4037-74ec-49b7-81ea-09f119ccac16
-rw------- 1 root root 120G Sep 11 13:31 /var/lib/libvirt/images/96329575-72e5-44c6-89b4-194d014db32f

(MAAS does not clean up disks when deleted from pod?)

Either way, the behavior shuts the VM down and makes it untraceable from MAAS.

Nobuto Murata (nobuto)
tags: added: cpe-onsite
tags: added: internal
Changed in maas:
milestone: none → 2.3.0beta3
importance: Undecided → High
status: New → Triaged
assignee: nobody → Andres Rodriguez (andreserl)
Changed in maas:
milestone: 2.3.0beta3 → 2.3.0beta4
tags: added: pod
Changed in maas:
assignee: Andres Rodriguez (andreserl) → Alberto Donato (ack)
Revision history for this message
Alberto Donato (ack) wrote :

It seems something has changed since the bug was reported.
Specifying an existing hostname (or an invalid one) just makes MAAS generate a petname, so the new machine is created without error.

Should we raise an error in those cases instead?

Revision history for this message
Dmitrii Shcherbakov (dmitriis) wrote : Re: [Bug 1716328] Re: [2.2] VM creation with pod accepts the same hostname and pushes out the original VM

Might be better to error out in my view rather than to hide it.


Revision history for this message
Nobuto Murata (nobuto) wrote :

> It seems something has changed since the bug was reported.
> Specifying an existing hostname (or an invalid one) just makes maas generate a petname, so the new machine is created without error.

Just to clarify, the second VM succeeds by pushing out the first VM, which is the core issue here, IMO.

[From the bug description]
In MAAS, there are two VMs defined, as foo.maas and rapid-swine.maas (randomly generated). Only the randomly named one commissioned successfully; foo.maas failed commissioning with a timeout.

> Should we raise an error in those cases instead?

Yes. That would be ideal with something like "the hostname already exists".
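The suggested fix could translate into a pre-compose check along these lines (a hedged sketch, not actual MAAS code; `validate_hostname` and the exact error text are hypothetical):

```python
# Sketch of the suggested fix, not actual MAAS code: reject a compose
# request whose requested hostname is already in use, instead of silently
# generating a petname (or, worse, redefining the existing domain).

def validate_hostname(requested: str, existing: set) -> str:
    """Return `requested` if it is free, otherwise raise with a clear message."""
    if requested in existing:
        raise ValueError(f"the hostname {requested!r} already exists")
    return requested

existing = {"foo", "rapid-swine"}
print(validate_hostname("bar", existing))   # free name: accepted
# validate_hostname("foo", existing) would raise:
#   ValueError: the hostname 'foo' already exists
```

Failing fast like this surfaces the collision to the API caller instead of leaving a half-commissioned machine behind.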

Revision history for this message
Alberto Donato (ack) wrote :

>Just to clarify, the second VM succeeds by pushing out the first VM. Which is the core issue here, IMO.

This is not what I see (hence I think something changed). The first VM is created with hostname "foo" while the second one is created with a random petname. Both show up in MAAS and in virsh list.

Revision history for this message
Andres Rodriguez (andreserl) wrote :

I was able to reproduce this. I composed 2 machines, and 2 machines were created in MAAS, named 'foo' and '<petname>'.

ubuntu@maas00:~$ maas admin pod compose 15 cores=1 memory=2000 storage=default:10 hostname=foo
Success.
Machine-readable output follows:
{
    "system_id": "watbmk",
    "resource_uri": "/MAAS/api/2.0/machines/watbmk/"
}

ubuntu@maas00:~$ maas admin pod compose 15 cores=1 memory=2000 storage=default:10 hostname=foo
Success.
Machine-readable output follows:
{
    "resource_uri": "/MAAS/api/2.0/machines/w4ypga/",
    "system_id": "w4ypga"
}

However, when I check the pod itself, only 1 is there:

ubuntu@node01:~$ virsh list --all
 Id Name State
----------------------------------------------------
 9 foo running

Revision history for this message
Alberto Donato (ack) wrote :

Never mind my last comment; I was able to reproduce it too.

Changed in maas:
milestone: 2.3.0beta4 → 2.3.0rc1
Alberto Donato (ack)
Changed in maas:
status: Triaged → In Progress
Changed in maas:
status: In Progress → Fix Committed
Changed in maas:
status: Fix Committed → Fix Released