Ephemeral disk not mounted when new instance requires reformat of the volume

Bug #1823100 reported by Jason Zions
Affects: cloud-init
Status: New
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

Each Azure VM is provided with an ephemeral disk, and the cloud-init configuration supplied in the VM image requests that the volume be mounted under /mnt. Each new ephemeral disk is formatted as NTFS rather than ext4 or another Linux filesystem. The Azure datasource detects this (in .activate()) and ensures that the disk_setup and mounts modules run. The disk_setup module formats the volume; the mounts module sees that the ephemeral volume is configured to be mounted and adds the appropriate entry to /etc/fstab. After updating fstab, the mounts module invokes the "mount -a" command to mount (or unmount) volumes according to fstab. That's how it all works during the initial provisioning of a new VM.
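
The first-boot sequence can be illustrated with a short Python sketch. This is not cloud-init's actual code: the device path and the helpers is_ntfs, format_ext4, and ensure_fstab_entry are hypothetical placeholders used only to show the order of operations (detect NTFS, reformat, add the fstab entry, then "mount -a").

    #!/usr/bin/env python3
    """Minimal sketch of the provisioning-time flow described above.

    Not cloud-init's real disk_setup/mounts code; helper names and the
    device path are assumptions made for illustration only.
    """
    import subprocess

    EPHEMERAL_DEV = "/dev/disk/cloud/azure_resource-part1"  # assumed device path
    MOUNT_POINT = "/mnt"
    FSTAB = "/etc/fstab"


    def is_ntfs(device: str) -> bool:
        """Return True if blkid reports an NTFS filesystem on the device."""
        out = subprocess.run(
            ["blkid", "-o", "value", "-s", "TYPE", device],
            capture_output=True, text=True, check=False,
        )
        return out.stdout.strip() == "ntfs"


    def format_ext4(device: str) -> None:
        """Reformat the ephemeral device as ext4 (the disk_setup step)."""
        subprocess.run(["mkfs.ext4", "-F", device], check=True)


    def ensure_fstab_entry(device: str, mount_point: str) -> bool:
        """Add an fstab entry if missing; return True only if fstab changed."""
        entry = f"{device}\t{mount_point}\tauto\tdefaults,nofail\t0\t2\n"
        with open(FSTAB, "r+") as fp:
            if any(mount_point in line.split() for line in fp):
                return False  # entry already present, nothing written
            fp.write(entry)
            return True


    if __name__ == "__main__":
        # First boot: NTFS detected, disk reformatted, fstab updated,
        # and "mount -a" activates the new entry.
        if is_ntfs(EPHEMERAL_DEV):
            format_ext4(EPHEMERAL_DEV)
        if ensure_fstab_entry(EPHEMERAL_DEV, MOUNT_POINT):
            subprocess.run(["mount", "-a"], check=True)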

When a VM gets rehosted for any reason (service heal, stop/deallocate and restart), the ephemeral drive provided to the previous instance is lost. A new ephemeral volume is supplied, also formatted as NTFS. When the VM boots, systemd's mnt.mount unit runs and complains about the unmountable NTFS volume that is still listed in /etc/fstab. The disk_setup module properly reformats the volume. However, the mounts module sees that the volume is *already* in fstab, concludes it didn't change anything, and therefore doesn't run "mount -a". The net result: the volume doesn't get mounted.
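
The decision that goes wrong on the second boot can be modelled with a tiny self-contained sketch. should_run_mount_a is a made-up function that captures the behaviour described above (only remount when fstab was modified); it is not cloud-init's real cc_mounts logic.

    #!/usr/bin/env python3
    """Sketch of the decision that skips "mount -a" after a re-host.

    Purely illustrative: this models the behaviour described in the
    report, not cloud-init's actual mounts implementation.
    """

    def should_run_mount_a(fstab_changed: bool) -> bool:
        """The mounts module only remounts when it modified /etc/fstab."""
        return fstab_changed


    # First boot: the /mnt entry is new, fstab changes, so "mount -a" runs.
    assert should_run_mount_a(fstab_changed=True) is True

    # After a re-host: the freshly reformatted ephemeral disk is *already*
    # listed in fstab, nothing changes, and "mount -a" is never invoked,
    # leaving the volume unmounted even though disk_setup reformatted it.
    assert should_run_mount_a(fstab_changed=False) is False
    print("re-host case: fstab unchanged -> mount -a skipped -> /mnt not mounted")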
