Cloud-init resets network scripts to the default DHCP configuration once the config drive is removed after some time

Bug #1893770 reported by vijayendra radhakrishna
Affects: cloud-init
Status: Expired
Importance: Medium
Assigned to: Unassigned

Bug Description

Cloud-init version: 19.1
Platform: OpenStack based.
OS: RHEL, SUSE

We use a config drive (/dev/sr0) as a datasource to configure network interfaces in the guest, but the config drive is not always available and may be removed from the hypervisor after a couple of hours.

On first boot, cloud-init uses the data provided in the config drive to write system-level network scripts (/etc/sysconfig/network-scripts/ifcfg-*, static network configuration) and to configure the interfaces in the guest.

As long as the config drive is available, reboots rely on the system scripts to configure the network. Once the config drive is removed, the datasource becomes None, meaning neither the system scripts nor the config drive is used,

which makes cloud-init fall back to the default network configuration, DHCP.

description: updated
Revision history for this message
Ryan Harper (raharper) wrote :

Hello,
Thanks for filing a bug.

The config-drive provided to an instance includes metadata that provides an instance-id. If the config-drive is removed, cloud-init should no longer be active during subsequent boots. It sounds like cloud-init is not installed correctly.

Are you using official cloud images from RHEL and SUSE? Have you manually enabled the cloud-init service files? Cloud-init will disable itself if no datasources are detected.

Can you run cloud-init collect-logs and attach the generated tarball?

Revision history for this message
Ryan Harper (raharper) wrote :
Changed in cloud-init:
status: New → Incomplete
Revision history for this message
Divya K Konoor (dikonoor) wrote :

The cloud-init used here is not from RHEL or SUSE. As a next step, we will try to validate whether this behavior can be reproduced with the cloud-init shipped with RHEL or SLES.

Revision history for this message
vijayendra radhakrishna (vradhakrishna) wrote :

We have tested the mentioned behavior with cloud-init 19.4 (the RHEL-provided cloud-init) on a RHEL 8.2 guest VM.

We observe the same behavior as explained above. Attached is the cloud-init log for your reference.

Please let me know if any other details are required.

Thanks.

Revision history for this message
Scott Moser (smoser) wrote :

I'm pretty sure cloud-init is working as designed here.
Please see
 * https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1888858
 * https://github.com/canonical/cloud-init/pull/568

for more information.

Revision history for this message
Divya K Konoor (dikonoor) wrote :

While we look at the links provided by Scott above
(https://bugs.launchpad.net/cloud-init/+bug/1712680, from inside one of the above links, has a lot of data), the current downstream patch that we are using to get past this issue is to return from the function below if the datasource is found to be None. This has worked for us so far.

at
https://github.com/canonical/cloud-init/blob/stable-19.4/cloudinit/stages.py#L678

def apply_network_config(self, bring_up):
    # Downstream patch: skip network configuration entirely when no real
    # datasource is present, rather than falling back to DHCP.
    if self.datasource is NULL_DATA_SOURCE or self.datasource is None:
        LOG.info("Data source is None. Skipping network config")
        return

    # DataSourceNone reports a dsname of "None"; treat it the same way.
    if (isinstance(self.datasource, dsnone.DataSourceNone)
            or getattr(self.datasource, 'dsname', None) in (None, "None")):
        LOG.info("Data source is an instance of DataSourceNone. "
                 "Skipping network config")
        return

Revision history for this message
Eduardo Otubo (otubo) wrote :

From the RHEL and OpenStack point of view, we never remove the config-drive, and that's the reason we never faced this issue. Is keeping the config-drive attached an option for you, Divya?

Also, if you could paste the code as a diff it would be much easier to review, or better yet a GitHub link with the colored diff.

Revision history for this message
Eduardo Otubo (otubo) wrote :

I just found other similar bugs related to this behavior; this was fixed downstream only. The bug wasn't exactly in cloud-init; it looks like we were not including ds-identify in the rpm, which caused the issue. Please give it a try with any RHEL-shipped rpms >= cloud-init-18.5-5.*

Revision history for this message
Divya K Konoor (dikonoor) wrote :

Eduardo, ok. I think we have tried it with the RHEL cloud-init 18.5 that ships with RHEL 8 and could reproduce it, but we will try again and get back.

We did take a look at why we need to remove the datasource, and for some reasons specific to our platform/environment, we do need to remove it. I believe cloud-init should accommodate both cases, with and without a datasource. If there is no datasource, cloud-init should not reset the network to DHCP.

Revision history for this message
Ryan Harper (raharper) wrote :

> If there is no datasource, cloud-init should not reset the network to DHCP.

If there is no datasource, cloud-init does not run. If you've removed your datasource, cloud-init should no longer activate during boot. In your case, you've removed your datasource, so cloud-init attempts to look for one. On the Power architecture, cloud-init cannot detect whether it's running on OpenStack without trying to contact the metadata service, due to limitations in Nova/PowerVM.

 https://cloudinit.readthedocs.io/en/latest/topics/datasources/openstack.html

cloud-init then attempts to contact OpenStack over the network: it makes a best guess about which network interface to bring up, runs DHCP on it, and attempts to contact the metadata service.

Command: ['/var/tmp/cloud-init/cloud-init-dhcp-2zup00qk/dhclient', '-1', '-v', '-lf', '/var/tmp/cloud-init/cloud-init-dhcp-2zup00qk/dhcp.leases', '-pf', '/var/tmp/cloud-init/cloud-init-dhcp-2zup00qk/dhclient.pid', 'env32', '-sf', '/bin/true']
Exit code: 2
Reason: -
Stdout:
Stderr: Internet Systems Consortium DHCP Client 4.3.6
        Copyright 2004-2017 Internet Systems Consortium.
        All rights reserved.
        For info, please visit https://www.isc.org/software/dhcp/

        Listening on LPF/env32/fa:c9:e6:d2:01:20
        Sending on LPF/env32/fa:c9:e6:d2:01:20
        Sending on Socket/fallback
        Created duid "\000\004\220\007\237\263\032[K\023\261\364}\206\256\275Jc".
        DHCPDISCOVER on env32 to 255.255.255.255 port 67 interval 4 (xid=0xde2eb608)
        DHCPDISCOVER on env32 to 255.255.255.255 port 67 interval 9 (xid=0xde2eb608)
        DHCPDISCOVER on env32 to 255.255.255.255 port 67 interval 18 (xid=0xde2eb608)
        DHCPDISCOVER on env32 to 255.255.255.255 port 67 interval 19 (xid=0xde2eb608)
        DHCPDISCOVER on env32 to 255.255.255.255 port 67 interval 11 (xid=0xde2eb608)
        No DHCPOFFERS received.
        Unable to obtain a lease on first try. Exiting.

Since cloud-init cannot reach the OpenStack metadata service, it does not have an instance-id, so it assumes this boot is a first boot. On first boot, without network configuration from OpenStack, cloud-init renders a fallback network config (DHCP on a best-guess interface), which is why you see a change in network configuration.

The core issues are:

1) OpenStack Nova on Power does not have a way to indicate to the guest that it's running on OpenStack; on x86 and arm this is done via SMBIOS/DMI tables, and there is not yet an implementation for Power.

2) When you remove your provided datasource (/dev/srX) you've removed the metadata that indicated cloud-init has already been booted.

As Scott mentioned, to address (2) you can look at the manual-cache-clean option, which configures cloud-init to keep the existing metadata in place until the user manually cleans out cloud-init's metadata.
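[Editor's note] A sketch of what enabling that option might look like; the drop-in filename is illustrative, while `manual_cache_clean` is the documented cloud-init setting:

```yaml
# /etc/cloud/cloud.cfg.d/99-manual-cache.cfg  (illustrative filename)
# Tell cloud-init to trust its cached datasource metadata across boots,
# even after the original datasource (the config-drive ISO) is gone,
# until an operator runs `cloud-init clean` manually.
manual_cache_clean: true
```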

Revision history for this message
Eduardo Otubo (otubo) wrote :

@Ryan, I have a question about:

> If there is no datasource, cloud-init does not run. If you've removed your datasource, cloud-init

If cloud-init doesn't run, it shouldn't look for datasources, is that correct?

> should no longer activate during boot. In your case, you've removed your datasource so cloud-init
> attempts to look for one. On Power architecture, cloud-init cannot detect if it's running on
> Openstack without trying to contact the metadata service due to limitations in the Nova/PowerVM.

Revision history for this message
Ryan Harper (raharper) wrote :

@Eduardo

> @Ryan, I have a question about:
>
> > If there is no datasource, cloud-init does not run. If you've removed your datasource, cloud-init
>
> If cloud-init doesn't run, it shouldn't look for datasources, is that correct?

cloud-init won't run if ds-identify does not detect a datasource.

On each boot cloud-init's systemd generator calls ds-identify. This program runs to determine whether cloud-init should run or not. The ds-identify default policy is to search, and report that cloud-init should run if:

1) it finds a datasource
2) if there might be a datasource

For (1), cloud-init examines specific values on the system: files in a
directory, values in the system UUID, etc. These types of checks are binary;
we either have the correct value or we don't.

For (2), in some cases ds-identify cannot be 100% sure the datasource isn't
present, because the platform on which we're running does not provide us with
the needed data.

For POWER systems which do not export DMI values, the OpenStack Datasource
will always return maybe as the only way to know is to bring up networking and
query the OpenStack metadata service.

Contrast this with x86 platform where OpenStack VMs export values in the DMI
table. ds-identify can check the value of the dmi product name and know for
sure whether it's running on OpenStack or not.

ds-identify then writes out its conclusions in /run/cloud-init/cloud.cfg
and enables the cloud-init.target which will run the 4 stages of cloud-init.

For this bug, on first boot, /dev/sr1 included an OpenStack ConfigDrive, so
ds-identify reports:

# cat /run/cloud-init/cloud.cfg
datasource_list: [ ConfigDrive, None ]

The configdrive payload includes a network configuration which was applied.
After some time, the ISO in /dev/sr1 was removed and the node in question
was rebooted. On the next boot, ds-identify runs and does not find a
ConfigDrive in /dev/sr1. When checking for the OpenStack datasource,
ds-identify will report 'maybe' since the arch is not x86. This results in
the following cloud.cfg:

# cat /run/cloud-init/cloud.cfg
datasource_list: [ OpenStack, None ]

Now the cloud-init OpenStack datasource starts, and because the system
currently lacks the *original* ConfigDrive datasource, including the
instance-id, this looks like a *brand new boot*. cloud-init will then bring
up ephemeral DHCP networking, attempt to see if there is an OpenStack
metadata server on the network (there is not, see the logs), and then
proceed to DataSourceNone, which is really a fallback datasource that tries
its best to be useful but ultimately isn't what folks really need.
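[Editor's note] The first-boot decision described in this comment can be sketched as follows. This is a minimal illustration, not cloud-init's actual code; the function and variable names are invented:

```python
# Illustrative sketch of the boot-classification logic described above.
# cloud-init caches the instance-id from the last successful datasource
# (under /var/lib/cloud); without a datasource to compare against, a boot
# is indistinguishable from a brand-new instance.

def classify_boot(cached_instance_id, current_instance_id):
    """Return 'first-boot' or 'subsequent-boot' (names are hypothetical)."""
    if current_instance_id is None:
        # No reachable datasource (ConfigDrive removed, metadata service
        # unreachable): cloud-init must assume this is a new instance.
        return "first-boot"
    if cached_instance_id == current_instance_id:
        return "subsequent-boot"
    return "first-boot"
```

On the reporter's system the ConfigDrive (and its instance-id) is gone, so every boot classifies as a first boot, which is what triggers the fallback DHCP config.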

Revision history for this message
vijayendra radhakrishna (vradhakrishna) wrote :

I have explored the suggested manual_cache_clean: true option a bit. Here are some of the findings:

1. Some of our capture use cases break, as this requires a manual cleanup on the VMs, per the documentation here:
 https://cloudinit.readthedocs.io/en/latest/topics/boot.html#not-present

2. With a very large number of VMs, this manual cleanup becomes an overhead for all capture use cases.

3. It also raises security concerns, since ssh-key rotation does not happen for capture use cases where the cleanup was not performed.

Revision history for this message
vijayendra radhakrishna (vradhakrishna) wrote :

In the above explanation, if I am not mistaken, if we remove the datasource (/dev/sr0) and reboot, we should end up with the cloud.cfg below, with OpenStack set, right? That doesn't seem to be the case here.

I am seeing that cloud.cfg is intact with [ ConfigDrive, None ]. With this, in my opinion, dscheck_OpenStack() shouldn't be called at all. When there is no datasource, cloud-init should disable itself, so why do we see the fallback DHCP configuration?

# cat /run/cloud-init/cloud.cfg
datasource_list: [ OpenStack, None ]

Revision history for this message
Ryan Harper (raharper) wrote :

> In the above explanation, if I am not mistaken, if we remove
> Datasource(/dev/sr0) and reboot we should end up with below cloud.cfg
> configuration with openstack set right? which doesn't seems to be case here.
>
> I am seeing cloud.cfg is intact with [ ConfigDrive, None ].

You're saying you've observed the above where you remove the ISO, reboot the
node, and after a reboot, the contents of /run/cloud-init/cloud.cfg shows
ConfigDrive ?

Can you provide the contents of /run/cloud-init/* and /var/log/cloud-init.log
for this scenario?

Could it be that the ISO was not removed, or that this came from one of the
instances where you were testing manual_cache_clean?

> with this in my opinion dscheck_OpenStack() shouldn't be called at all. when

ds-identify is called every boot. The config files in /run/cloud-init are
ephemeral and are thrown away each boot (/run is tmpfs mount).

> no data source cloudinit should get disabled itself but here why do we see
> dhcp configuration which is fallback

Recall that since you're not on x86, the OpenStack check cannot return False
as the *only* way to know for sure that there isn't an OpenStack metadata
service on the network is to try it (or have the image configured to not
check OpenStack)
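[Editor's note] The tri-state detection described here can be sketched as follows. This is illustrative Python; ds-identify is actually a shell script, and the DMI product strings shown are assumptions:

```python
# Illustrative tri-state datasource check mirroring the behavior described:
# on x86, DMI values give a definite yes/no answer; on POWER there are no
# DMI values, so the check can only answer "maybe" and cloud-init must
# bring up networking and probe the metadata service to be sure.

def dscheck_openstack(arch, dmi_product_name=None):
    if arch in ("x86_64", "i686"):
        # Assumed product strings; the real ds-identify checks several
        # DMI values, not just the product name.
        if dmi_product_name in ("OpenStack Nova", "OpenStack Compute"):
            return "found"
        return "not-found"
    # No DMI on ppc64le: cannot rule OpenStack out without probing.
    return "maybe"
```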

Revision history for this message
vijayendra radhakrishna (vradhakrishna) wrote :

@Ryan,

Yes. The above is when we explicitly specify ConfigDrive as the datasource in /etc/cloud/cloud.cfg:
datasource_list: [ ConfigDrive, None ]. In this case as well, we observe cloud-init setting the fallback network in /etc/sysconfig/network-scripts/ifcfg-*. Any reason for this?

Attached is the cloud-init collect-logs output.

Revision history for this message
Ryan Harper (raharper) wrote :

> Yes, Above is when we explicitly specify ConfigDrive is the datasource in
> the /etc/cloud/cloud.cfg, datasource_list: [ ConfigDrive, None ]. In this
> case also we are observing the cloudinit setting fallback network in
> /etc/sysconfig/network-scripts/ifcfg-* . Any reason for this?

Yes; when you remove the ConfigDrive, cloud-init no longer knows it has a
datasource, even though you *told* cloud-init via the hard-coded
/etc/cloud/cloud.cfg file that there *would* be a ConfigDrive. Without the
ISO attached to the instance, cloud-init cannot tell that it is booting into
the same instance as last time.

When booting without the ISO, cloud-init tries to find it but fails, and it
continues to boot without a datasource, trying to do something useful for
the later stages of boot. In the absence of a datasource to provide
cloud-init with the network configuration, cloud-init uses its fallback
network config, which is DHCP on one interface.
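[Editor's note] For reference, the fallback described here amounts to a config along these lines. This is a sketch; the interface name is a placeholder, since cloud-init picks the NIC itself:

```python
# Sketch of the kind of fallback network config cloud-init renders when no
# datasource supplies one: DHCP enabled on a single best-guess interface.
fallback_config = {
    "version": 2,
    "ethernets": {
        "eth0": {"dhcp4": True},  # 'eth0' is illustrative only
    },
}

# On RHEL this kind of config is rendered to sysconfig format, producing
# the BOOTPROTO=dhcp ifcfg-* files the reporter observes.
```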

From your logs (thanks!),

All of the boots where /dev/sr0 has the Config drive, we see cloud-init search
for a ConfigDrive and find it on /dev/sr0, like so

2020-05-07 06:41:11,105 - util.py[DEBUG]: Cloud-init v. 19.1 running 'init-local' at Thu, 07 May 2020 06:41:11 +0000. Up 3356.48 seconds.
...
2020-05-07 06:41:11,159 - __init__.py[DEBUG]: Looking for data source in: ['ConfigDrive', 'None'], via packages ['', 'cloudinit.sources'] that matches dependencies ['FILESYSTEM']
2020-05-07 06:41:11,162 - __init__.py[DEBUG]: Searching for local data source in: ['DataSourceConfigDrive']
2020-05-07 06:41:11,163 - handlers.py[DEBUG]: start: init-local/search-ConfigDrive: searching for local data from DataSourceConfigDrive
2020-05-07 06:41:11,163 - __init__.py[DEBUG]: Seeing if we can get any data from <class 'cloudinit.sources.DataSourceConfigDrive.DataSourceConfigDrive'>
2020-05-07 06:41:11,163 - __init__.py[DEBUG]: Update datasource metadata and network config due to events: New instance first boot
2020-05-07 06:41:11,163 - util.py[DEBUG]: Running command ['blkid', '-odevice', '/dev/sr0'] with allowed return codes [0, 2] (shell=False, capture=True)
2020-05-07 06:41:11,174 - util.py[DEBUG]: Running command ['blkid', '-odevice', '/dev/sr1'] with allowed return codes [0, 2] (shell=False, capture=True)
2020-05-07 06:41:11,177 - util.py[DEBUG]: Running command ['blkid', '-odevice', '/dev/cd0'] with allowed return codes [0, 2] (shell=False, capture=True)
2020-05-07 06:41:11,180 - util.py[DEBUG]: Running command ['blkid', '-odevice', '/dev/cd1'] with allowed return codes [0, 2] (shell=False, capture=True)
2020-05-07 06:41:11,183 - util.py[DEBUG]: Running command ['blkid', '-tTYPE=vfat', '-odevice'] with allowed return codes [0, 2] (shell=False, capture=True)
2020-05-07 06:41:11,199 - util.py[DEBUG]: Running command ['blkid', '-tTYPE=iso9660', '-odevice'] with allowed return codes [0, 2] (shell=False, capture=True)
2020-05-07 06:41:11,208 - util.py[DEBUG]: Running command ['blkid', '-tLABEL=config-2', '-odevice'] with allowed return codes [0, 2] (shell=False, capture=True)
2020-05-07 06:41:11,217 - util.py[DEBUG]: Running command ['blkid', '-tLABEL=CONFIG-2', '-odevice'] with allowed return codes [0, 2] (shell=False, c...


Revision history for this message
vijayendra radhakrishna (vradhakrishna) wrote :

@Ryan,

Although I still need to take a look at your suggestion about PR 229, here are a few more findings with the ds-identify tool. Let me know whether a PR against ds-identify itself would be acceptable.

Currently ds-identify returns DS_FOUND on subsequent boots even though the config drive (/dev/sr0) has been removed. I believe this shouldn't happen (let me know if you think otherwise). Here I am trying to fix this particular behavior for POWER hardware only. Say we do something like this in ds-identify:

1. Detect that it is POWER hardware and the hypervisor is PowerVM.
2. Check whether /dev/sr0 (the config drive) is mountable; if not, return DS_NOTFOUND.

With this, I hope cloud-init will not configure the fallback (DHCP) on PowerVM.
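[Editor's note] The proposed check could be sketched roughly like this. This is illustrative Python, not actual ds-identify shell; the function name and the read-probe are assumptions:

```python
# Illustrative sketch of step 2 above: treat the config drive as present
# only if the device node exists and its media can actually be read.

def check_configdrive(device="/dev/sr0"):
    """Return 'DS_FOUND' if the device is readable, else 'DS_NOTFOUND'."""
    try:
        with open(device, "rb") as f:
            f.read(2048)  # missing or ejected media raises OSError here
        return "DS_FOUND"
    except OSError:
        return "DS_NOTFOUND"
```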

Thanks

Revision history for this message
Ryan Harper (raharper) wrote :

> All though I need to take a look at your suggestion about PR
> 229: Here is few more findings with ds-identify tool, Let me if
> this can be accepted if I raise a PR against ds-identity tool
> itself.
>
> Currently ds-identify returns DS_FOUND on subsequent boot even
> though config drive (/dev/sr0) is removed. I believe this
> shouldn't happen (let me know if you think otherwise), Here I

ds-identify will already do this if you remove your hardcoded
datasource list from /etc/cloud/cloud.cfg.

Note that by default (on Ubuntu at least), the datasource_list
is populated with all *potential* datasources:

root@g1:~# grep datasource_list /etc/cloud/cloud.cfg
root@g1:~# cat /etc/cloud/cloud.cfg.d/90_dpkg.cfg
# to update this file, run dpkg-reconfigure cloud-init
datasource_list: [ NoCloud, ConfigDrive, OpenNebula, DigitalOcean, Azure, AltCloud, OVF, MAAS, GCE, OpenStack, CloudSigma, SmartOS, Bigstep, Scaleway, AliYun, Ec2, CloudStack, Hetzner, IBMCloud, Oracle, Exoscale, RbxCloud, None ]

When ds-identify runs, it *reads* /etc/cloud/cloud.cfg and
/etc/cloud/cloud.cfg.d/*.cfg looking for the value of
datasource_list.

By default, this is, as you see, a long list. For each of these
potential datasources, ds-identify attempts to determine if
the datasource is present.

In your case where you've provided a datasource_list value
with *one* datasource (ds-identify ignores None) then
it does not do any detection at all; the image has "told"
ds-identify which datasource to use.

Look at your ds-identify.log file you provided:

[up 17.85s] ds-identify
policy loaded: mode=search report=false found=all maybe=all notfound=enabled
/etc/cloud/cloud.cfg set datasource_list: [ ConfigDrive, None ]
WARN: No dmidecode program. Cannot read sys_vendor.
WARN: No dmidecode program. Cannot read chassis_asset_tag.
WARN: No dmidecode program. Cannot read product_name.
WARN: No dmidecode program. Cannot read product_serial.
WARN: No dmidecode program. Cannot read product_uuid.
DMI_PRODUCT_NAME=error
DMI_SYS_VENDOR=error
DMI_PRODUCT_SERIAL=error
DMI_PRODUCT_UUID=error
PID_1_PRODUCT_NAME=unavailable
DMI_CHASSIS_ASSET_TAG=error
FS_LABELS=
ISO9660_DEVS=
KERNEL_CMDLINE=BOOT_IMAGE=/vmlinuz-4.18.0-193.el8.ppc64le root=/dev/mapper/rhel_p8--9--vios1-root ro crashkernel=auto rd.lvm.lv=rhel_p8-9-vios1/root rd.lvm.lv=rhel_p8-9-vios1/swap biosdevname=0
VIRT=none
UNAME_KERNEL_NAME=Linux
UNAME_KERNEL_RELEASE=4.18.0-193.el8.ppc64le
UNAME_KERNEL_VERSION=#1 SMP Fri Mar 27 14:40:12 UTC 2020
UNAME_MACHINE=ppc64le
UNAME_NODENAME=vijayendra10.pok.stglabs.ibm.com
UNAME_OPERATING_SYSTEM=GNU/Linux
DSNAME=
DSLIST=ConfigDrive None
MODE=search
ON_FOUND=all
ON_MAYBE=all
ON_NOTFOUND=enabled
pid=810 ppid=787
is_container=false
single entry in datasource_list (ConfigDrive None) use that.
[up 18.17s] returning 0

> am trying to fix this particular behavior for power hardware
> only. Lets say if do something like this in ds-identify
>
> 1. Detect its power hardware and hypervisor is powerVM
> 2. check if /dev/sr0(configdrive) is mountable or not if not
> return DS_NOTFOUND
>
> with this I hope cloudinit will not configure fallback(dhcp) on
> powerVM

The current behavior for single-datasource could...


Revision history for this message
Scott Moser (smoser) wrote :

> The current behavior for single-datasource could be changed to
> *check* if the single datasource is present.

> I think this would address your case. As a quick test for you
> if you update your datasource_list to include one more datasource
> that you know isn't present, like NoCloud, then ds-identify won't
> exit early and will attempt to see if ConfigDrive or NoCloud are
> present; they won't be; and cloud-init would stay disabled.

It would address this use case, but it would break per-boot behavior:
with cloud-init disabled, it would not run any per-boot functionality.

Revision history for this message
vijayendra radhakrishna (vradhakrishna) wrote :

@Ryan, I tried your suggestion of adding one more datasource (NoCloud) along with ConfigDrive, but I still ended up with cloud-init resetting to the fallback (DHCP).

JFYI, I am running in a RHEL env.

Revision history for this message
Ryan Harper (raharper) wrote :

> > The current behavior for single-datasource could be changed to
> > *check* if the single datasource is present.
>
> > I think this would address your case. As a quick test for you
> > if you update your datasource_list to include one more datasource
> > that you know isn't present, like NoCloud, then ds-identify won't
> > exit early and will attempt to see if ConfigDrive or NoCloud are
> > present; they won't be; and cloud-init would stay disabled.
>
> It would address this use case, but would break per-boot behavior,
> as cloud-init would be disabled.... it would not run any per-boot
> functionality.

Yes; you're quite right. Thanks for pointing that part out.

Revision history for this message
Ryan Harper (raharper) wrote :

@vijayendra

The reason you're getting that behavior is that the ds-identify policy is to enable cloud-init if it does not find anything:

[up 12.54s] ds-identify
policy loaded: mode=search report=false found=all maybe=all notfound=enable
...
No ds found [mode=search, notfound=enabled]. Enabled cloud-init [0]
[up 12.78s] returning 0

The default policy is notfound=disable. Is something changing the default ds-identify policy?

Revision history for this message
vijayendra radhakrishna (vradhakrishna) wrote :

As per Ryan Harper's suggestion, with the config changes below we may avoid the fallback (DHCP) network reset once the config drive is removed.

Change 1:
Disable cloud-init when no datasource is found:
config file: /etc/cloud/ds-identify.cfg
policy: search,found=all,maybe=all,notfound=disabled

Change 2:
config file: /etc/cloud/cloud.cfg
We may also have to add one more non-existent datasource, such as NoCloud, to avoid the ds-identify early exit:
datasource_list: [ ConfigDrive, NoCloud, None ]

The above is only a short-term workaround, as it also breaks per-boot functionality once the config drive is removed.

Currently reworking one of the PRs (https://github.com/canonical/cloud-init/pull/229) and testing it in our POWER environment.

Revision history for this message
vijayendra radhakrishna (vradhakrishna) wrote :

@Ryan Harper

Created below PR as suggested.
https://github.com/canonical/cloud-init/pull/647

Revision history for this message
vijayendra radhakrishna (vradhakrishna) wrote :

Attached the cloud-init log for PR #647.

Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for cloud-init because there has been no activity for 60 days.]

Changed in cloud-init:
status: Incomplete → Expired
Ryan Harper (raharper)
Changed in cloud-init:
status: Expired → In Progress
importance: Undecided → Medium
Revision history for this message
James Falcon (falcojr) wrote :
Changed in cloud-init:
status: In Progress → Expired