LVM on hypervisors

Bug #1692555 reported by Filippo DiNoto
Affects: OpenStack-Ansible
Status: Expired
Importance: Undecided
Assigned to: Unassigned

Bug Description

Guest LVM disks attached to the host via iSCSI are seen by the host. This can cause Ansible fact gathering to hang, presumably because multiple volumes with the same names exist or too many potential boot sources are found. It also causes apt/dpkg issues when updating grub-related packages.

Initially I thought the best way to handle this would be to simply remove the lvm2 packages from the hypervisors. However, after removing lvm2 from the list of host packages installed by OSA, I found that it is still being installed as a dependency of something else, probably one of the guestfs packages.

Adding:

filter = [ "r/.*/" ]

to lvm.conf disables LVM on all disks. But it seems that LVM configuration has been removed from the base host playbooks and is now only modified in the cinder playbooks. I think support for modifying this setting should be included in the nova plays.
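
As a rough sketch of what that could look like in the nova plays (untested; the task name, regexp, and layout assumptions below are mine, not existing OSA code, and assume a stock Ubuntu /etc/lvm/lvm.conf):

# Illustrative only: make the compute host's LVM filter reject every
# block device so guest volumes attached over iSCSI are never scanned.
- name: Reject all block devices in the LVM filter on compute hosts
  lineinfile:
    dest: /etc/lvm/lvm.conf
    regexp: '^\s*filter\s*='
    line: '    filter = [ "r/.*/" ]'
    insertafter: '^devices \{'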

Revision history for this message
Andy McCrae (andrew-mccrae) wrote :

Hi Filippo, do you mean the ability to edit the filters for your lvm.conf?
https://github.com/openstack/openstack-ansible-os_cinder/blob/master/templates/lvm.conf.j2#L31

We should ignore any that are already used_lvm_devices based on that, but we could allow an override, I imagine? Is that what you would be looking for?

Changed in openstack-ansible:
status: New → Incomplete
Revision history for this message
Filippo DiNoto (fdinoto) wrote :

I'm not sure what the goal of the lvm.conf.j2 is.

I need to prevent any disks attached to the host from being seen by LVM; that doesn't happen by default. I am having trouble understanding how Ansible facts are going to provide anything relevant to the desired result.

Comment on line 17:
{# If there are no LVM devices present, allow all devices to be scanned #}

Why would anyone want all devices to be scanned? These are hypervisors. Nobody wants hypervisors to see LVM data on volumes attached to guests, do they?

I can see a case for specifying some disks that are intended to be used by the hypervisor while ignoring all other disks. But even in that case, the user is going to have to specify the UUIDs of those disks; there is no way Ansible is going to be able to determine that on its own.
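
For example (the variable name and device path below are made up purely for illustration, not existing OSA variables), a deployer-supplied accept list that a template could render into the filter might look like:

# Hypothetical deployer override: accept only the hypervisor's own disk
# and reject everything else.
hypervisor_lvm_filter:
  - "a|^/dev/disk/by-id/wwn-0x5000c500a1b2c3d4|"
  - "r|.*|"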

Revision history for this message
Filippo DiNoto (fdinoto) wrote :

Also I want to point out that the mentioned template is part of the os_cinder role. This issue relates to compute hosts that are skipped by the cinder plays.

Revision history for this message
Andy McCrae (andrew-mccrae) wrote :

Ahh I see, so it's on the compute host that isn't a cinder volumes host.

I think libguestfs0 installs lvm2 as a dependency.
We could manage the filter line inside lvm.conf, although I imagine we'd have to not do that if a host is both a compute host and a cinder host, which then becomes tricky.

So if you add that filter line, from the initial bug report, to lvm.conf on the compute hosts, that resolves the issue with the Ansible fact gathering?

It sounds like there is a bugged device there, for the fact gathering to fail - can you narrow it down to which device it's hanging on?

My concern is that if we manage the lvm.conf - 1. it becomes complicated because you could overlap and cause lvm.conf managed by cinder to be overwritten, or vice versa (resulting in unexpected conf options and results), and 2. the lvm configuration isn't actually the root cause of the issue.
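
To make that concrete, a task like the sketch in the bug description would need a guard along these lines (untested and purely illustrative; the group names are just the usual OSA inventory groups):

# Hypothetical guard: only force the reject-all filter on hosts that run
# nova-compute and are not also cinder volume hosts, so we never clobber
# an lvm.conf that the os_cinder role manages.
- name: Reject all block devices in the LVM filter on compute-only hosts
  lineinfile:
    dest: /etc/lvm/lvm.conf
    regexp: '^\s*filter\s*='
    line: '    filter = [ "r/.*/" ]'
    insertafter: '^devices \{'
  when:
    - inventory_hostname in groups['nova_compute']
    - inventory_hostname not in (groups['cinder_volume'] | default([]))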

Revision history for this message
Filippo DiNoto (fdinoto) wrote :

My opinion on the matter is that the current approach is too opinionated.

I do have a single host that is both a nova-compute and a cinder-volume host. But the cinder-volume service is not using the LVM driver. So I personally have little concern with ensuring OSA can deploy cinder-volume with the LVM driver.

I view cinder-volume with the LVM driver as a lab/test solution. I could be wrong about that; maybe people use it in production. nova-compute attaching to iSCSI cinder volumes, regardless of the volume driver, should be a higher priority.

I'd classify this as a security issue too. I've seen compute hosts detect a guest's swap logical volume and attach it, and OSA isn't doing anything to prevent that. Since the guest can still write to that same volume, this effectively gives the guest the opportunity to tamper with memory the host swaps out.

Revision history for this message
Logan V (loganv) wrote :

Even on your host where nova-compute and cinder-volume are colocated, I think the os_cinder LVM setup won't bite us there, because the template should only be dropped if the LVM backend is in use (conditional here: https://github.com/openstack/openstack-ansible-os_cinder/blob/9549dd0a91395b976a1c518e0f77d4e70b9a7aa6/tasks/main.yml#L69-L71)

Assuming the above is true, I think that means the LVM setup is pretty much completely unopinionated in your env. Some package is pulling in the LVM package as a dependency, or OSA is dropping LVM mistakenly when it doesn't actually need it, but regardless the lvm.conf should be vanilla on those hosts, right? As in, OSA is not managing the compute host LVM config at all. If that is true, then I think this is something you could simply manage in your environment with the settings appropriate for your hosts, and expect OSA not to clobber them, since it should not take any interest in your LVM config if OpenStack is not configured to use LVM.

Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for openstack-ansible because there has been no activity for 60 days.]

Changed in openstack-ansible:
status: Incomplete → Expired