vm fails to boot due to conflicting network configuration when user switches from netplan to eni

Bug #1832381 reported by Anh Vo (MSFT)
This bug affects 4 people
Affects: cloud-init (Ubuntu)
Status: Triaged
Importance: Medium
Assigned to: Unassigned
Milestone: (none)

Bug Description

When the user provisions a bionic VM for the first time, cloud-init picks netplan as the renderer since no other renderer (e.g., eni) is available, and writes a netplan file (50-cloud-init.yaml). If the user then installs ifupdown but does not remove netplan, the VM can fail to boot when a conflicting change happens to the NIC, such as a MAC address change. The reason is that once ifupdown is installed, cloud-init prefers eni at boot time, while the previously written netplan configuration file still contains the now-stale network configuration.
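For illustration, the netplan file written on first boot typically pins the NIC to its MAC address; the interface name and MAC address below are made up, but a file along these lines is what becomes stale once the MAC changes:

    network:
      version: 2
      ethernets:
        eth0:
          match:
            macaddress: "00:0d:3a:aa:bb:cc"   # MAC of the NIC at first boot; stale after a MAC change
          set-name: eth0
          dhcp4: true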

This was discussed in the cloud-init IRC channel, and the general consensus was that while the user should remove netplan when installing ifupdown, it is not a hard requirement, since netplan and ifupdown can coexist. cloud-init should consider removing the conflicting netplan file that it wrote once it chooses eni.
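A rough sketch of the manual workaround, assuming the default paths cloud-init uses on Ubuntu (filenames may differ on other images):

    # remove the stale netplan config that cloud-init wrote on the first boot
    sudo rm /etc/netplan/50-cloud-init.yaml
    # clear cloud-init state and logs so the network config is regenerated (via eni) on the next boot
    sudo cloud-init clean --logs
    sudo reboot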

Revision history for this message
Ryan Harper (raharper) wrote:

Do you happen to have cloud-init collect-logs output? I suspect it's easy enough to recreate the bug, but if you already have logs available, please attach them.

summary:
- vm fails to boot due to conflicting network configuration when cloudinit switches from netplan to eni
+ vm fails to boot due to conflicting network configuration when user switches from netplan to eni
Changed in cloud-init (Ubuntu):
status: New → Incomplete
Revision history for this message
Launchpad Janitor (janitor) wrote:

[Expired for cloud-init (Ubuntu) because there has been no activity for 60 days.]

Changed in cloud-init (Ubuntu):
status: Incomplete → Expired
Ryan Harper (raharper)
Changed in cloud-init (Ubuntu):
importance: Undecided → Medium
status: Expired → Triaged
Revision history for this message
Thorsten Meinl (sithmein) wrote:

In my opinion this is a serious bug. I created an AWS AMI using packer and one of the installed packages (salt-minion) pulled in ifupdown. This then broke network connectivity of instances started from the custom AMI. It took me five hours to find out what the problem was.
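For image builds like this, one option is to run an equivalent cleanup as the last provisioning step before the image is captured, so the stale netplan file is not baked into the AMI; the paths below are the Ubuntu defaults and are an assumption about the build:

    # last provisioning step before capturing the image
    sudo rm -f /etc/netplan/50-cloud-init.yaml   # drop the netplan file written for the build instance
    sudo cloud-init clean --logs                 # reset cloud-init so instances launched from the image start fresh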
