Terrible user experience adding existing LXD host
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| MAAS | Fix Released | High | Alberto Donato | |
| maas-ui | Fix Released | Unknown | | |
Bug Description
I wanted to test the LXD VM host feature again, so I added a KVM host through MAAS.
It properly connected to the system and promptly imported all my existing VMs (why???).
It then marked any VM it didn't already know about as Commissioning, which greatly concerned me, as those are pre-existing production VMs that I absolutely don't want MAAS to go and reboot right now.
I resorted to blocking network access from MAAS to that LXD host to avoid any additional issues. Then, when deleting the KVM host, the UI made me feel like it might delete all my existing VMs as well. Obviously it couldn't reach them anymore, and it may never have tried (I don't know).
One last issue: as mentioned, some VMs on that host were already individually added to MAAS as machines. MAAS apparently merged that information, so when I removed the LXD host, it promptly deleted 12 of my machines, which I now have to manually re-add, commission, and re-configure.
This whole behavior seems completely unacceptable to me. Either MAAS can deal with existing VMs and add them as running, properly handling the case where some of the VM names may already be defined in MAAS, or it can only deal with an empty LXD host and should immediately refuse to connect should anything already exist there.
The MAAS behavior when deleting a LXD host from MAAS should also be clarified so that users are not left wondering if MAAS is about to go on a deleting spree potentially causing massive data loss.
Obviously all of this would be significantly easier and safer if MAAS supported LXD projects, since it could then simply require an empty project, avoiding this entire mess.
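For reference, the projects approach suggested above could look roughly like this (a sketch using standard `lxc` commands; the project name is hypothetical, not anything MAAS currently does):

```shell
# Create a dedicated, initially-empty project for MAAS-managed instances
lxc project create maas-managed

# Pre-existing VMs live in the "default" project and stay untouched;
# a MAAS scoped to its own project would only ever see instances here:
lxc list --project maas-managed
```

Scoping MAAS to its own project would make "require an empty project" a trivial check at connect time, and deleting the KVM host could then safely remove only the instances inside that project.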
Just to be clear, the behavior I would have expected there would be one of:
- MAAS to ignore all existing VMs and just let me create more new VMs
- MAAS to tell me that there are existing VMs and allow me to pick those that I want to see imported as machines
- MAAS to tell me that it can't deal with LXD hosts that have existing VMs
Changed in maas:
  status: New → Triaged
  importance: Undecided → High

Changed in maas-ui:
  status: Unknown → New

Changed in maas:
  status: Triaged → In Progress
  assignee: nobody → Alberto Donato (ack)
  milestone: none → 2.10-next

Changed in maas:
  status: In Progress → Fix Committed

Changed in maas:
  milestone: 3.0.0 → 3.0-beta1

Changed in maas:
  status: Fix Committed → Fix Released

Changed in maas-ui:
  status: New → Fix Released
When Pods were initially implemented in MAAS for Intel RSD and libvirt, the decision was made that MAAS would have full control; no non-MAAS use is expected or supported. As such, when a Pod is added, all existing machines are commissioned, power cycled, and shown as new machines. When a Pod is deleted, all machines associated with it are deleted as well, as they are MAAS machines.
Our goal last cycle for LXD was feature parity with Intel RSD and libvirt. We didn't change any of the behavior of how Pods work in MAAS; we just extended support to LXD. In the past, when users have requested multi-tenant Pods, we've instructed them to use containers or virtual machines with KVM passthrough for individual use cases.
I agree with this bug report that MAAS should handle multi-use Pods better; I'm just trying to explain why things are the way they are.