Cloud-init resets network scripts to the default DHCP configuration once the config drive is removed after some time
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
cloud-init | Expired | Medium | Unassigned |
Bug Description
Cloud-init version: 19.1
Platform: OpenStack-based
OS: RHEL, SUSE
We use a config drive (/dev/sr0) as the data source to configure network interfaces in the guest, but the config drive is not always available and may be removed from the hypervisor after a couple of hours.
On first boot cloud-init uses the data provided on the config drive and updates the system-level network scripts under /etc/sysconfig/
As long as the config drive is available, reboots rely on the system scripts to configure the network, but once the config drive is removed the datasource becomes None, meaning it is neither the system scripts nor the config drive,
which makes cloud-init configure the default network, which is DHCP.
description: | updated |
Ryan Harper (raharper) wrote : | #1 |
Ryan Harper (raharper) wrote : | #2 |
Changed in cloud-init: | |
status: | New → Incomplete |
Divya K Konoor (dikonoor) wrote : | #3 |
The cloud-init used here is not from RHEL or SUSE. As a next step, we will try to validate whether this behavior can be reproduced with the cloud-init shipped with RHEL or SLES.
vijayendra radhakrishna (vradhakrishna) wrote : | #4 |
- cloud-init log (33.5 KiB, application/x-tar)
We have tested the described behavior with cloud-init 19.4 (RHEL-provided cloud-init) on a RHEL 8.2 guest VM
and observe the same behavior as explained above. The cloud-init log is attached for your reference.
Please let me know if any other details are required.
Thanks.
Scott Moser (smoser) wrote : | #5 |
I'm pretty sure cloud-init is working as designed here.
Please see
* https:/
* https:/
for more information.
Divya K Konoor (dikonoor) wrote : | #6 |
While we look at the links provided by Scott above
(https:/
at
https:/
def apply_network_
if ((self.datasource is NULL_DATA_SOURCE) or (
return
if self.datasource:
try:
if ((self.
except BaseException:
if (isinstance(
Eduardo Otubo (otubo) wrote : | #7 |
From the RHEL and OpenStack point of view, we never remove the config-drive, and that's the reason we never faced this issue. Is keeping the config-drive attached an option for you, Divya?
Also, if you could paste a diff-style code it would be much easier to review, or even better a github link with the colored diff.
Eduardo Otubo (otubo) wrote : | #8 |
Just found out other similar bugs related to this behavior and this is fixed downstream only. The bug wasn't exactly on cloud-init, but looks like we were not including ds-identify on the rpm, causing the issue. Please give it a try on any RHEL shipped rpms >= cloud-init-18.5-5.*
Divya K Konoor (dikonoor) wrote : | #9 |
Eduardo, ok. I think we tried it with the RHEL cloud-init 18.5 that was shipped with RHEL 8 and could reproduce it, but we will try again and get back.
We did take a look at why we have a need to remove the data source and due to some reasons specific to our platform/
Ryan Harper (raharper) wrote : | #10 |
> In a case there is no datasource, cloud-init should not go and reset NW to dhcp.
If there is no datasource, cloud-init does not run. If you've removed your datasource, cloud-init should no longer activate during boot. In your case, you've removed your datasource so cloud-init attempts to look for one. On Power architecture, cloud-init cannot detect if it's running on Openstack without trying to contact the metadata service due to limitations in the Nova/PowerVM.
https:/
cloud-init then attempts to contact OpenStack over the network; it does this by making a best guess at which network interface to bring up, running DHCP, and attempting to contact the metadata service.
Command: ['/var/
Exit code: 2
Reason: -
Stdout:
Stderr: Internet Systems Consortium DHCP Client 4.3.6
Copyright 2004-2017 Internet Systems Consortium.
All rights reserved.
For info, please visit https:/
Listening on LPF/env32/
Sending on LPF/env32/
Sending on Socket/fallback
Created duid "\000\004\
No DHCPOFFERS received.
Unable to obtain a lease on first try. Exiting.
Since cloud-init cannot reach the OpenStack metadata service, it does not have an instance-id so it will assume this boot is a first boot. On first boot, without network configuration from Openstack cloud-init will render a fallback network config, dhcp on best guess interface, which is why you see a change in network configuration.
The core issues are:
1) OpenStack Nova on Power does not have a way to indicate to the guest that it's running on Openstack; on x86 and arm, this is done via smbios/dmi tables; there is not yet an implementation for Power
2) When you remove your provided datasource (/dev/srX) you've removed the metadata that indicated cloud-init has already been booted.
As Scott mentioned, to address (2), you can look at the manual-cache-clean option, which configures cloud-init in a way that keeps existing metadata in place until the user manually cleans out cloud-init's metadata.
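For reference, manual cache cleaning is a single top-level key in cloud-init's configuration; a minimal drop-in might look like the following (the file name is illustrative — any file under /etc/cloud/cloud.cfg.d/ works):

```yaml
# /etc/cloud/cloud.cfg.d/99-manual-cache.cfg  (illustrative file name)
# Keep the cached datasource/instance metadata across boots; cloud-init
# reuses it until an operator explicitly runs `cloud-init clean`.
manual_cache_clean: true
```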
Eduardo Otubo (otubo) wrote : | #11 |
@Ryan, I have a question about:
> If there is no datasource, cloud-init does not run. If you've removed your datasource, cloud-init
If cloud-init doesn't run, it shouldn't look for datasources, is that correct?
> should no longer activate during boot. In your case, you've removed your datasource so cloud-init
> attempts to look for one. On Power architecture, cloud-init cannot detect if it's running on
> Openstack without trying to contact the metadata service due to limitations in the Nova/PowerVM.
Ryan Harper (raharper) wrote : | #12 |
@Eduardo
> @Ryan, I have a question about:
>
> > If there is no datasource, cloud-init does not run. If you've removed your datasource, cloud-init
>
> If cloud-init doesn't run, it shouldn't look for datasources, is that correct?
cloud-init won't run if ds-identify does not detect a datasource.
On each boot cloud-init's systemd generator calls ds-identify. This program runs to determine whether cloud-init should run or not. The ds-identify default policy is to search, and report that cloud-init should run if:
1) it finds a datasource
2) there might be a datasource
For (1); cloud-init examines specific values on the system, files in a
directory, values in system UUID, etc... these types of checks are binary;
we either have the correct value or we don't.
For (2); in some cases, ds-identify cannot be 100% sure the datasource isn't
present because the platform on which we're running does not provide us with
the needed data.
For POWER systems which do not export DMI values, the OpenStack Datasource
will always return maybe as the only way to know is to bring up networking and
query the OpenStack metadata service.
Contrast this with x86 platform where OpenStack VMs export values in the DMI
table. ds-identify can check the value of the dmi product name and know for
sure whether it's running on OpenStack or not.
ds-identify then writes out its conclusions in /run/cloud-
and enables the cloud-init.target which will run the 4 stages of cloud-init.
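The decision flow above can be sketched as a small Python model (a simplification for illustration only; the real ds-identify is a shell script and its result names differ):

```python
# Toy model of ds-identify's search mode: each datasource check yields
# "found", "maybe", or "not-found"; the policy decides whether the
# cloud-init.target gets enabled for this boot.

def ds_identify(checks, notfound_policy="disabled"):
    """checks: list of (datasource, result); returns (enabled, ds_list)."""
    found = [name for name, res in checks if res == "found"]
    maybe = [name for name, res in checks if res == "maybe"]
    if found:   # (1) a datasource was positively detected
        return True, found + ["None"]
    if maybe:   # (2) a datasource *might* be present -> run and let it probe
        return True, maybe + ["None"]
    # nothing detected: the notfound policy decides
    return notfound_policy == "enabled", []

# x86 OpenStack guest with the ConfigDrive removed: DMI gives "found"
print(ds_identify([("ConfigDrive", "not-found"), ("OpenStack", "found")]))
# POWER guest without DMI: OpenStack can only ever answer "maybe"
print(ds_identify([("ConfigDrive", "not-found"), ("OpenStack", "maybe")]))
```

In this model, the POWER "maybe" keeps cloud-init enabled even with the drive gone, which is exactly the behavior reported in this bug.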
For this bug, on first boot, /dev/sr1 included an OpenStack ConfigDrive, so
ds-identify reports:
# cat /run/cloud-
datasource_list: [ ConfigDrive, None ]
The configdrive payload includes a network configuration which was applied.
After some time, the ISO in /dev/sr1 was removed and the node in question
was rebooted. On the next boot, ds-identify runs and does not find a
ConfigDrive in /dev/sr1; when checking for OpenStack datasources, ds-identify
reports 'maybe' since the arch is not x86. This results in the
following cloud.cfg
# cat /run/cloud-
datasource_list: [ OpenStack, None ]
Now, the cloud-init OpenStack datasource starts, and because the system currently
lacks the *original* ConfigDrive datasource, including the instance-id, this
looks like a *brand new boot*; cloud-init will then bring up ephemeral
DHCP networking, attempt to see if there is an OpenStack metadata server
on the network (there is not, see the logs), and then proceed to
DataSourceNone, which is really a fallback datasource that tries its best
to be useful but ultimately isn't what folks really need.
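The "brand new boot" determination comes down to comparing the datasource's instance-id with the one cloud-init cached on a previous boot. A rough sketch of that check (simplified; the path and function name here are illustrative, not cloud-init's actual internals):

```python
import os

def is_new_instance(current_instance_id, cache_dir="/var/lib/cloud/data"):
    """First boot iff the current instance-id differs from the cached one
    (or no cache exists, e.g. the metadata was never written)."""
    cache_file = os.path.join(cache_dir, "instance-id")
    try:
        with open(cache_file) as f:
            previous = f.read().strip()
    except FileNotFoundError:
        return True  # nothing cached: treat as a first boot
    return current_instance_id != previous
```

With the config drive gone, the OpenStack probe yields no instance-id to match against the cache, so every boot looks like a first boot and fallback networking gets rendered.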
vijayendra radhakrishna (vradhakrishna) wrote : | #13 |
I have explored the suggested manual_cache_clean: true option a bit. Here are some of the findings:
1. Some of our capture use cases break, as this requires a manual cleanup on the VMs per the documentation mentioned here:
https:/
2. With a very large number of VMs, doing a manual cleanup for all capture use cases becomes an overhead.
3. It also raises security concerns when ssh-key rotation doesn't happen for a capture use case where the cleanup was not performed.
vijayendra radhakrishna (vradhakrishna) wrote : | #14 |
In the above explanation, if I am not mistaken, if we remove Datasource(
I am seeing that cloud.cfg is intact with [ ConfigDrive, None ]. With this, in my opinion, dscheck_OpenStack() shouldn't be called at all. When there is no data source, cloud-init should disable itself, so why do we see the DHCP configuration, which is the fallback?
# cat /run/cloud-
datasource_list: [ OpenStack, None ]
Ryan Harper (raharper) wrote : | #15 |
> In the above explanation, if I am not mistaken, if we remove
> Datasource(
> configuration with openstack set right? which doesn't seems to be case here.
>
> I am seeing cloud.cfg is intact with [ ConfigDrive, None ].
You're saying you've observed the above: you remove the ISO, reboot the
node, and after the reboot, the contents of /run/cloud-
still show ConfigDrive?
Can you provide the contents of /run/cloud-init/* and /var/log/
for this scenario?
Could it be that either the ISO was not removed, or this came from one of the
instances where you were testing manual_cache_clean?
> with this in my opinion dscheck_OpenStack() shouldn't be called at all. when
ds-identify is called every boot. The config files in /run/cloud-init are
ephemeral and are thrown away each boot (/run is tmpfs mount).
> no data source cloudinit should get disabled itself but here why do we see
> dhcp configuration which is fallback
Recall that since you're not on x86, the OpenStack check cannot return False,
as the *only* way to know for sure that there isn't an OpenStack metadata
service on the network is to try it (or to have the image configured to not
check OpenStack).
vijayendra radhakrishna (vradhakrishna) wrote : | #16 |
- cloud-init log when we explicitly specify ConfigDrive as the datasource (242.9 KiB, application/x-tar)
@Ryan,
Yes, the above is when we explicitly specify ConfigDrive as the datasource in /etc/cloud/
datasource_list: [ ConfigDrive, None ]. In this case also we observe cloud-init setting the fallback network in /etc/sysconfig/
Attached are the cloud-init collect-logs.
Ryan Harper (raharper) wrote : | #17 |
> Yes, Above is when we explicitly specify ConfigDrive is the datasource in
> the /etc/cloud/
> case also we are observing the cloudinit setting fallback network in
> /etc/sysconfig/
Yes; when you remove the ConfigDrive cloud-init no longer knows it has a
datasource, though you *told* cloud-init via the hard-coded
/etc/cloud/
ISO attached to the instance, cloud-init cannot tell it is booting into the
same instance as it was last time.
When booting without the ISO, cloud-init tries to find it, fails, and
continues booting, trying to do something useful for the later stages of
boot. In the absence of a datasource to provide cloud-init with a
network config, cloud-init renders its fallback
network config, which is to DHCP on an interface.
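That fallback rendering can be sketched as follows (a toy version for illustration; cloud-init's real heuristic also filters out virtual devices, checks carrier, and the netplan-style dict here is just one possible rendering):

```python
def fallback_network_config(nics):
    """Pick a best-guess interface and configure DHCP on it."""
    if not nics:
        raise ValueError("no candidate interfaces")
    # prefer a conventional primary name when present, else the first NIC
    chosen = next((n for n in nics if n in ("eth0", "ens3", "enp0s3")), nics[0])
    return {"version": 2, "ethernets": {chosen: {"dhcp4": True}}}

# the dhclient log above shows cloud-init guessing "env32" on this guest:
print(fallback_network_config(["env32"]))
# -> {'version': 2, 'ethernets': {'env32': {'dhcp4': True}}}
```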
From your logs (thanks!),
All of the boots where /dev/sr0 has the Config drive, we see cloud-init search
for a ConfigDrive and find it on /dev/sr0, like so
2020-05-07 06:41:11,105 - util.py[DEBUG]: Cloud-init v. 19.1 running 'init-local' at Thu, 07 May 2020 06:41:11 +0000. Up 3356.48 seconds.
...
2020-05-07 06:41:11,159 - __init__.py[DEBUG]: Looking for data source in: ['ConfigDrive', 'None'], via packages ['', 'cloudinit.
2020-05-07 06:41:11,162 - __init__.py[DEBUG]: Searching for local data source in: ['DataSourceCon
2020-05-07 06:41:11,163 - handlers.py[DEBUG]: start: init-local/
2020-05-07 06:41:11,163 - __init__.py[DEBUG]: Seeing if we can get any data from <class 'cloudinit.
2020-05-07 06:41:11,163 - __init__.py[DEBUG]: Update datasource metadata and network config due to events: New instance first boot
2020-05-07 06:41:11,163 - util.py[DEBUG]: Running command ['blkid', '-odevice', '/dev/sr0'] with allowed return codes [0, 2] (shell=False, capture=True)
2020-05-07 06:41:11,174 - util.py[DEBUG]: Running command ['blkid', '-odevice', '/dev/sr1'] with allowed return codes [0, 2] (shell=False, capture=True)
2020-05-07 06:41:11,177 - util.py[DEBUG]: Running command ['blkid', '-odevice', '/dev/cd0'] with allowed return codes [0, 2] (shell=False, capture=True)
2020-05-07 06:41:11,180 - util.py[DEBUG]: Running command ['blkid', '-odevice', '/dev/cd1'] with allowed return codes [0, 2] (shell=False, capture=True)
2020-05-07 06:41:11,183 - util.py[DEBUG]: Running command ['blkid', '-tTYPE=vfat', '-odevice'] with allowed return codes [0, 2] (shell=False, capture=True)
2020-05-07 06:41:11,199 - util.py[DEBUG]: Running command ['blkid', '-tTYPE=iso9660', '-odevice'] with allowed return codes [0, 2] (shell=False, capture=True)
2020-05-07 06:41:11,208 - util.py[DEBUG]: Running command ['blkid', '-tLABEL=config-2', '-odevice'] with allowed return codes [0, 2] (shell=False, capture=True)
2020-05-07 06:41:11,217 - util.py[DEBUG]: Running command ['blkid', '-tLABEL=CONFIG-2', '-odevice'] with allowed return codes [0, 2] (shell=False, c...
vijayendra radhakrishna (vradhakrishna) wrote : | #18 |
@Ryan,
Although I need to take a look at your suggestion about PR 229, here are a few more findings with the ds-identify tool. Let me know if a fix would be accepted if I raise a PR against ds-identify itself.
Currently ds-identify returns DS_FOUND on subsequent boots even though the config drive (/dev/sr0) is removed. I believe this shouldn't happen (let me know if you think otherwise). Here I am trying to fix this particular behavior for Power hardware only. Let's say we do something like this in ds-identify:
1. Detect that it is Power hardware and the hypervisor is PowerVM
2. Check if /dev/sr0(
return DS_NOTFOUND
With this, I hope cloud-init will not configure the fallback (DHCP) on PowerVM.
Thanks
Ryan Harper (raharper) wrote : | #19 |
> All though I need to take a look at your suggestion about PR
> 229: Here is few more findings with ds-identify tool, Let me if
> this can be accepted if I raise a PR against ds-identity tool
> itself.
>
> Currently ds-identify returns DS_FOUND on subsequent boot even
> though config drive (/dev/sr0) is removed. I believe this
> shouldn't happen (let me know if you think otherwise), Here I
ds-identify will already do this. If you remove your hardcoded
datasource list from /etc/cloud/
Note that by default (on Ubuntu at least), the datasource_list
is populated with all *potential* datasources:
root@g1:~# grep datasource_list /etc/cloud/
root@g1:~# cat /etc/cloud/
# to update this file, run dpkg-reconfigure cloud-init
datasource_list: [ NoCloud, ConfigDrive, OpenNebula, DigitalOcean, Azure, AltCloud, OVF, MAAS, GCE, OpenStack, CloudSigma, SmartOS, Bigstep, Scaleway, AliYun, Ec2, CloudStack, Hetzner, IBMCloud, Oracle, Exoscale, RbxCloud, None ]
When ds-identify runs, it *reads* /etc/cloud/
/etc/cloud/
datasource_list.
By default, this is, as you see, a long list. For each of these
potential datasources, ds-identify attempts to determine if
the datasource is present.
In your case, where you've provided a datasource_list value
with *one* datasource (ds-identify ignores None),
it does not do any detection at all; the image has "told"
ds-identify which datasource to use.
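The single-entry shortcut can be modeled like this (illustrative Python; the real logic is shell code inside ds-identify):

```python
def effective_mode(datasource_list):
    """With exactly one real datasource configured (None is ignored),
    ds-identify trusts the image and skips detection entirely."""
    real = [ds for ds in datasource_list if ds != "None"]
    if len(real) == 1:
        return ("use-without-checking", real)
    return ("detect", real)

print(effective_mode(["ConfigDrive", "None"]))             # no detection at all
print(effective_mode(["ConfigDrive", "NoCloud", "None"]))  # detection runs
```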
Look at your ds-identify.log file you provided:
[up 17.85s] ds-identify
policy loaded: mode=search report=false found=all maybe=all notfound=enabled
/etc/cloud/
WARN: No dmidecode program. Cannot read sys_vendor.
WARN: No dmidecode program. Cannot read chassis_asset_tag.
WARN: No dmidecode program. Cannot read product_name.
WARN: No dmidecode program. Cannot read product_serial.
WARN: No dmidecode program. Cannot read product_uuid.
DMI_PRODUCT_
DMI_SYS_
DMI_PRODUCT_
DMI_PRODUCT_
PID_1_PRODUCT_
DMI_CHASSIS_
FS_LABELS=
ISO9660_DEVS=
KERNEL_
VIRT=none
UNAME_KERNEL_
UNAME_KERNEL_
UNAME_KERNEL_
UNAME_MACHINE=
UNAME_NODENAME=
UNAME_OPERATING
DSNAME=
DSLIST=ConfigDrive None
MODE=search
ON_FOUND=all
ON_MAYBE=all
ON_NOTFOUND=enabled
pid=810 ppid=787
is_container=false
single entry in datasource_list (ConfigDrive None) use that.
[up 18.17s] returning 0
> am trying to fix this particular behavior for power hardware
> only. Lets say if do something like this in ds-identify
>
> 1. Detect its power hardware and hypervisor is powerVM
> 2. check if /dev/sr0(
> return DS_NOTFOUND
>
> with this I hope cloudinit will not configure fallback(dhcp) on
> powerVM
The current behavior for single-datasource could be changed to *check* if the single datasource is present. I think this would address your case. As a quick test, if you update your datasource_list to include one more datasource that you know isn't present, like NoCloud, then ds-identify won't exit early and will attempt to see if ConfigDrive or NoCloud are present; they won't be; and cloud-init would stay disabled.
Scott Moser (smoser) wrote : | #20 |
> The current behavior for single-datasource could be changed to
> *check* if the single datasource is present.
> I think this would address your case. As a quick test for you
> if you update your datasource_list to include one more datasource
> that you know isn't present, like NoCloud, then ds-identify won't
> exit early and will attempt to see if ConfigDrive or NoCloud are
> present; they won't be; and cloud-init would stay disabled.
It would address this use case, but would break per-boot behavior,
as cloud-init would be disabled.... it would not run any per-boot
functionality.
vijayendra radhakrishna (vradhakrishna) wrote : | #21 |
- logs attached (268.2 KiB, application/x-tar)
@Ryan, I tried your suggestion of adding one more datasource, NoCloud, along with ConfigDrive, but I still ended up with cloud-init resetting to the fallback (DHCP).
JFYI, I am running in a RHEL env.
Ryan Harper (raharper) wrote : | #22 |
> > The current behavior for single-datasource could be changed to
> > *check* if the single datasource is present.
>
> > I think this would address your case. As a quick test for you
> > if you update your datasource_list to include one more datasource
> > that you know isn't present, like NoCloud, then ds-identify won't
> > exit early and will attempt to see if ConfigDrive or NoCloud are
> > present; they won't be; and cloud-init would stay disabled.
>
> It would address this use case, but would break per-boot behavior,
> as cloud-init would be disabled.... it would not run any per-boot
> functionality.
Yes; you're quite right. Thanks for pointing that part out.
Ryan Harper (raharper) wrote : | #23 |
@vijayendra
The reason you're getting that behavior is that the ds-identify policy is to enable cloud-init if it does not find anything:
[up 12.54s] ds-identify
policy loaded: mode=search report=false found=all maybe=all notfound=enable
...
No ds found [mode=search, notfound=enabled]. Enabled cloud-init [0]
[up 12.78s] returning 0
The default policy is notfound=disabled. Is something changing the default ds-identify policy?
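For reference, the policy can be pinned in ds-identify's own config file; something along these lines (verify the exact tokens against your cloud-init version's ds-identify documentation):

```
# /etc/cloud/ds-identify.cfg
# Search for datasources, but keep cloud-init disabled when none is found.
policy: search,found=all,maybe=all,notfound=disabled
```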
vijayendra radhakrishna (vradhakrishna) wrote : | #24 |
Per Ryan Harper's suggestion, with the config changes below we may not hit the fallback (DHCP) network reset once the config drive is removed.
Change 1:
Disable cloud-init when no ds is found:
config file: /etc/cloud/
policy: search,
Change 2:
config file: /etc/cloud/
We may also have to add one more non-existent data source, such as NoCloud, to avoid cloud-init's early exit:
datasource_list: [ ConfigDrive, NoCloud, None ]
The above is just a short-term fix, as this will also break per-boot behavior once the ConfigDrive is removed.
Currently reworking one of the PRs: https:/
vijayendra radhakrishna (vradhakrishna) wrote : | #25 |
@Ryan Harper
Created below PR as suggested.
https:/
vijayendra radhakrishna (vradhakrishna) wrote : | #26 |
- attached cloud-init log for PR #647 (50.5 KiB, application/x-tar)
attached cloud-init log for PR: #647
Launchpad Janitor (janitor) wrote : | #27 |
[Expired for cloud-init because there has been no activity for 60 days.]
Changed in cloud-init: | |
status: | Incomplete → Expired |
Changed in cloud-init: | |
status: | Expired → In Progress |
importance: | Undecided → Medium |
James Falcon (falcojr) wrote : | #28 |
Tracked in Github Issues as https:/
Changed in cloud-init: | |
status: | In Progress → Expired |
Hello,
Thanks for filing a bug.
The config-drive provided to an instance includes metadata that provides an instance-id. If the config-drive is removed, cloud-init should no longer be active during subsequent boots. It sounds like cloud-init is not installed correctly.
Are you using official cloud images from RHEL and SUSE? Have you manually enabled the cloud-init service files? Cloud-init will disable itself if no datasources are detected.
Can you run cloud-init collect-logs and attach the generated tarball?