[2.x] Power Settings for virsh are not per node, they are global

Bug #1656091 reported by Ryan McAdams
This bug affects 7 people
Affects   Status      Importance   Assigned to   Milestone
MAAS      Won't Fix   Critical     Unassigned
2.2       Won't Fix   Critical     Unassigned

Bug Description

Pretty simple (what I believe to be a bug):

If you have multiple KVM hosts running on your network and you boot machines from them into MAAS, the settings for the "virsh" power type are global rather than per node.

Example Configuration:

Host 1 - Running KVM
Host 2 - Running KVM

VM 1 - Guest running on Host 1
VM 2 - Guest running on Host 2

Reproduce the issue:

PXE boot VM1 into MAAS and configure the virsh settings on its node.

Test power controls, etc., to ensure it's working appropriately.

PXE boot VM2 into MAAS and configure the virsh settings on its node.

Test power controls, etc., to ensure it's working appropriately.

After you've done this with VM2, you'll no longer be able to control VM1, because the virsh host settings have now been changed to point at VM2's host (Host 2).
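
For illustration, the per-node settings involved can be inspected and changed from the MAAS CLI roughly as follows. This is a sketch only: it assumes a CLI profile named "admin", and the system IDs, host names, and VM names are hypothetical.

    # Read the virsh power parameters currently recorded for VM1's node
    maas admin machine power-parameters <vm1-system-id>

    # Point VM2's node at Host 2; with this bug, the new power_address
    # effectively replaces the one VM1's node was using as well
    maas admin machine update <vm2-system-id> \
        power_type=virsh \
        power_parameters_power_address=qemu+ssh://ubuntu@host2/system \
        power_parameters_power_id=vm2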

Workarounds tried:

I've tried putting the hosts/VMs in different zones, etc., to no avail.

tags: added: docteam
Revision history for this message
Ryan McAdams (ryanmcadams) wrote :

MAAS Version 2.1.1+bzr5544-0ubuntu1 (16.04.1)

Revision history for this message
Mike Pontillo (mpontillo) wrote :

I haven't seen this problem in my test environments, but it's possible that's because I use the "Add Hardware > Chassis" functionality to add the virtual machines instead of manually adding them. (Use a prefix filter based on the VM name if you don't want to add all of them; for example, if you name your MAAS nodes "maas1" and "maas2", a prefix filter of "maas" will enlist both of those nodes.)

Can you let me know if that is a workaround for you?
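
For reference, the CLI equivalent of that UI flow should be something like the following sketch, assuming a profile named "admin" and a hypothetical KVM host URL; the prefix_filter argument matches VM names by prefix.

    # Enlist every VM on Host 1 whose libvirt name starts with "maas"
    maas admin machines add-chassis \
        chassis_type=virsh \
        hostname=qemu+ssh://ubuntu@host1/system \
        prefix_filter=maas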

Also, while I'm not aware of any changes to MAAS affecting this issue, I recommend that you update to the latest MAAS patch from xenial-proposed (2.1.3) and check if the issue still occurs.

Revision history for this message
Mike Pontillo (mpontillo) wrote :

On second thought, I can't say I haven't had this problem. (I haven't tried running MAAS against two KVM hosts at once.)

I'll see if someone can check into this.

Revision history for this message
Ryan McAdams (ryanmcadams) wrote :

I can confirm this has happened in two different environments, with Xenial and with Yakkety. I haven't tested it with Zesty... would Yakkety include the proposed updates?

Revision history for this message
Mike Pontillo (mpontillo) wrote :

I'm not sure, since I tend to stick to Xenial (that's where customers typically run MAAS). I would use `apt-cache policy maas` to find out.
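
For example, on a Xenial system the check and the update would look roughly like this (a sketch; the -proposed pocket has to be enabled first, and the sources.list path here is just an illustration):

    # Show the installed maas version and the available candidates
    apt-cache policy maas

    # Enable the proposed pocket and install maas from it
    echo "deb http://archive.ubuntu.com/ubuntu xenial-proposed main universe" | \
        sudo tee /etc/apt/sources.list.d/xenial-proposed.list
    sudo apt-get update
    sudo apt-get install maas/xenial-proposed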

Revision history for this message
Ryan McAdams (ryanmcadams) wrote :

Still happens even with xenial-proposed. My guess is that this is expected behavior (given the attachment), but it's not functionally feasible.

Revision history for this message
Mike Pontillo (mpontillo) wrote :

Well, I think it's still a valid bug, because we should still be able to differentiate them based on their URL. We do this for most power drivers by means of the IP address, but virtual-machine-based power drivers are intended primarily for demo and testing purposes, so that is probably why this is broken.
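
Concretely, the two nodes should keep distinct virsh connection URIs, and each URI can be checked directly against its hypervisor (host names here are hypothetical):

    # Each URI targets a different KVM host and lists a different set of guests
    virsh -c qemu+ssh://ubuntu@host1/system list --all
    virsh -c qemu+ssh://ubuntu@host2/system list --all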

Revision history for this message
Ryan McAdams (ryanmcadams) wrote :

Makes sense. Keep in mind that a lot of labs (like ours) run on the VMs, and a lot of dev work is done against them, so this will grow in importance as MAAS gains adoption.

Ryan Beisner (1chb1n)
tags: added: uosci
Revision history for this message
Peter Matulis (petermatulis) wrote :

I've experienced this before as well. I recall that changing the virsh settings for one node would break other nodes (they could no longer be contacted, since their settings changed automatically). However, there is definitely some flakiness here, because I currently have 4 nodes configured for one KVM host and 1 node configured for a second KVM host. It could have something to do with the web UI; I configured all my nodes via the API. Now, when I go into the web UI to change a node in the group of 4, I am told that 3 other nodes will be affected.

Revision history for this message
Ryan McAdams (ryanmcadams) wrote :

Peter -

Correct - I found a workaround, but I don't believe it to be an effective way to use MAAS. If you add a 'chassis' without PXE booting the KVM VMs, it will automatically add all of them with the appropriate settings and save those settings. However, every time you add a new VM to the host, you'll have to use the 'add chassis' menu option to bring in the new KVM VMs.

Mike -

I've been doing a lot of tinkering and testing with MAAS, and to me this is a glaring deficiency at this point. We use KVM VMs in production for our OpenStack clusters, for things like OpenStack directors, infrastructure VMs, and/or SDN controllers. We architect the setup over a few physical servers to run our clusters, so this seems important even for production systems, not just demo and testing.

Changed in maas:
importance: Undecided → Wishlist
milestone: none → 2.3.0
milestone: 2.3.0 → none
importance: Wishlist → Undecided
Revision history for this message
Andres Rodriguez (andreserl) wrote :

I believe this is a more widespread problem. It doesn't only happen with virsh but with other power types as well:

Digital Loggers Inc. PDU - https://bugs.launchpad.net/maas/+bug/1667633
BMC duplication - https://bugs.launchpad.net/maas/+bug/1676808

Changed in maas:
status: New → Triaged
importance: Undecided → High
milestone: none → 2.3.0
summary: - Power Settings for virsh are not per node, they are global
+ [2.x] Power Settings for virsh are not per node, they are global
Changed in maas:
importance: High → Critical
tags: added: cpec
Changed in maas:
milestone: 2.3.0 → 2.3.0beta2
Ante Karamatić (ivoks)
tags: added: cpe-onsite
removed: cpec
tags: added: internal
Revision history for this message
Björn Tillenius (bjornt) wrote :

Can someone please confirm this bug still exists in 2.3beta1? And if so, please provide detailed instructions on how to reproduce it. I followed the steps described in the bug description, using the web UI to edit the power settings. I was able to power manage VMs on both hosts without any problems.

Changed in maas:
status: Triaged → Incomplete
Revision history for this message
Björn Tillenius (bjornt) wrote :

I still can't reproduce this bug using the steps in the bug description. The duplicate bugs describe another scenario, though: you have two VMs on the same host and want to change one of them to point at another host (either because you migrated the VM or specified the wrong IP by mistake).

That situation is not a simple bug to fix. It's more of a UX issue and will require some thought about how to fix it properly.

A workaround would be to set the node's power type to Manual, and then set it to virsh again with the right settings.
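
On the command line that workaround would look roughly like this sketch, assuming a profile named "admin"; the system ID, host, and VM name are hypothetical.

    # Detach the node from the shared virsh power (BMC) record
    maas admin machine update <system-id> power_type=manual

    # Re-attach it with the correct per-host settings
    maas admin machine update <system-id> \
        power_type=virsh \
        power_parameters_power_address=qemu+ssh://ubuntu@host2/system \
        power_parameters_power_id=vm2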

Changed in maas:
milestone: 2.3.0beta2 → 2.3.0beta3
Changed in maas:
milestone: 2.3.0beta3 → 2.3.0beta4
Changed in maas:
milestone: 2.3.0beta4 → 2.3.0rc1
milestone: 2.3.0rc1 → 2.3.x
Revision history for this message
Andres Rodriguez (andreserl) wrote :

Dear user,

This is an automated message.

We believe this bug report is no longer an issue in the latest version of MAAS. For that reason, we are marking this issue as Won't Fix. If you believe this issue is still present in the latest version of MAAS, please re-open this bug report.

Changed in maas:
status: Incomplete → Won't Fix