[OVH manual provider] ceph-osd requires mlocate package but does not install it

Bug #1759145 reported by Calvin Hartwell
This bug affects 1 person
Affects: Ceph OSD Charm
Status: Triaged
Importance: Low
Assigned to: Unassigned

Bug Description

Hi all,

The ceph-osd charm requires the mlocate package to be installed on Ubuntu, but it does not install it by default, which results in the following error (https://gist.github.com/CalvinHartwell/63b652c4286195659749588a12a2f39c):

unit-ceph-osd-2: 09:32:16 INFO unit.ceph-osd/2.juju-log Installing apparmor profile for ceph-osd
unit-ceph-osd-2: 09:32:17 INFO unit.ceph-osd/2.juju-log Installing apparmor utils.
unit-kubernetes-worker-0: 09:32:17 DEBUG unit.kubernetes-worker/0.update-status Error from server (NotFound): nodes "FROVHWORKERK8SDEV-N04" not found
unit-kubernetes-worker-0: 09:32:17 INFO unit.kubernetes-worker/0.juju-log Failed to apply label juju-application=kubernetes-worker. Will retry.
unit-kubernetes-worker-0: 09:32:18 DEBUG unit.kubernetes-worker/0.update-status Error from server (NotFound): nodes "FROVHWORKERK8SDEV-N04" not found
unit-ceph-osd-2: 09:32:18 INFO unit.ceph-osd/2.juju-log Setting up the apparmor profile for usr.bin.ceph-osd in disable mode.
unit-kubernetes-worker-0: 09:32:18 INFO unit.kubernetes-worker/0.juju-log Failed to apply label juju-application=kubernetes-worker. Will retry.
unit-ceph-osd-2: 09:32:19 DEBUG unit.ceph-osd/2.config-changed Disabling /etc/apparmor.d/usr.bin.ceph-osd.
unit-ceph-osd-2: 09:32:19 DEBUG unit.ceph-osd/2.config-changed
unit-ceph-osd-2: 09:32:19 DEBUG unit.ceph-osd/2.config-changed ERROR: /sbin/apparmor_parser: Unable to remove "/usr/bin/ceph-osd". Profile doesn't exist
unit-ceph-osd-2: 09:32:19 DEBUG unit.ceph-osd/2.config-changed
unit-ceph-osd-2: 09:32:19 INFO unit.ceph-osd/2.juju-log Manually disabling the apparmor profile for usr.bin.ceph-osd.
unit-ceph-osd-2: 09:32:19 DEBUG unit.ceph-osd/2.config-changed apparmor.service is not active, cannot reload.
unit-ceph-osd-2: 09:32:19 DEBUG unit.ceph-osd/2.config-changed Traceback (most recent call last):
unit-ceph-osd-2: 09:32:19 DEBUG unit.ceph-osd/2.config-changed File "/var/lib/juju/agents/unit-ceph-osd-2/charm/hooks/config-changed", line 559, in <module>
unit-ceph-osd-2: 09:32:19 DEBUG unit.ceph-osd/2.config-changed hooks.execute(sys.argv)
unit-ceph-osd-2: 09:32:19 DEBUG unit.ceph-osd/2.config-changed File "/var/lib/juju/agents/unit-ceph-osd-2/charm/hooks/charmhelpers/core/hookenv.py", line 800, in execute
unit-ceph-osd-2: 09:32:19 DEBUG unit.ceph-osd/2.config-changed self._hooks[hook_name]()
unit-ceph-osd-2: 09:32:19 DEBUG unit.ceph-osd/2.config-changed File "/var/lib/juju/agents/unit-ceph-osd-2/charm/hooks/charmhelpers/contrib/hardening/harden.py", line 79, in _harden_inner2
unit-ceph-osd-2: 09:32:19 DEBUG unit.ceph-osd/2.config-changed return f(*args, **kwargs)
unit-ceph-osd-2: 09:32:19 DEBUG unit.ceph-osd/2.config-changed File "/var/lib/juju/agents/unit-ceph-osd-2/charm/hooks/config-changed", line 367, in config_changed
unit-ceph-osd-2: 09:32:19 DEBUG unit.ceph-osd/2.config-changed add_to_updatedb_prunepath(STORAGE_MOUNT_PATH)
unit-ceph-osd-2: 09:32:19 DEBUG unit.ceph-osd/2.config-changed File "/var/lib/juju/agents/unit-ceph-osd-2/charm/hooks/charmhelpers/core/host.py", line 975, in add_to_updatedb_prunepath
unit-ceph-osd-2: 09:32:19 DEBUG unit.ceph-osd/2.config-changed with open(updatedb_path, 'r+') as f_id:
unit-ceph-osd-2: 09:32:19 DEBUG unit.ceph-osd/2.config-changed FileNotFoundError: [Errno 2] No such file or directory: '/etc/updatedb.conf'

Installing the mlocate package fixes the issue, i.e. sudo apt-get install mlocate. This should be automated as part of the charm (see the sketch below).

Thanks
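
A minimal sketch of what that automation could look like, assuming the standard charmhelpers fetch helpers are available on the unit (the ensure_updatedb_prunepath wrapper name is illustrative, not the charm's actual code):

import os

from charmhelpers.core.host import add_to_updatedb_prunepath
from charmhelpers.fetch import apt_install, filter_installed_packages


def ensure_updatedb_prunepath(path):
    # On stripped-down images (e.g. OVH's 16.04) mlocate may be absent, so
    # /etc/updatedb.conf does not exist and open() raises FileNotFoundError.
    # Install it first; filter_installed_packages() makes this a no-op when
    # the package is already present.
    apt_install(filter_installed_packages(['mlocate']), fatal=True)
    if os.path.exists('/etc/updatedb.conf'):
        add_to_updatedb_prunepath(path)


# The config-changed hook would then call, for example:
# ensure_updatedb_prunepath(STORAGE_MOUNT_PATH)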

Revision history for this message
Calvin Hartwell (calvinh) wrote :

Currently on a customer site, so a quick turnaround would be appreciated.

Fix for now:

juju run --all -- sudo apt-get install mlocate

Revision history for this message
Nobuto Murata (nobuto) wrote :

Which Juju provider are you on?

mlocate is in the "standard" package set on xenial:

$ apt-cache show mlocate | grep Task:
Task: standard

Also in MAAS images by default:
http://images.maas.io/ephemeral-v3/daily/xenial/amd64/20180323/squashfs.manifest

Changed in charm-ceph-osd:
status: New → Incomplete
Revision history for this message
Chris Procter (chrisp262) wrote :

We're using the manual provider with 16.04 images supplied by the cloud hosting company (OVH).

Revision history for this message
Nobuto Murata (nobuto) wrote :

Then it sounds like it is your responsibility to make sure the expected package set is installed before adding nodes with the manual provider, e.g. something like
apt-get install minimal^ standard^
and so on.

Changed in charm-ceph-osd:
status: Incomplete → New
summary: - ceph-osd requires mlocate package but does not install it
+ [OVH manual provider] ceph-osd requires mlocate package but does not
+ install it
Revision history for this message
Ryan Beisner (1chb1n) wrote :

A deeper question would be: how and what should charm authors expect to have to declare as dependencies, when the image in use is not a standard cloud image?

Should the charms declare essentially the full set of packages in a standard cloud image, to be thorough?

That might be a large hammer, but it seems that this could be any package, and it could affect any charm.

Revision history for this message
Chris Procter (chrisp262) wrote : Re: [Bug 1759145] Re: [OVH manual provider] ceph-osd requires mlocate package but does not install it

It doesn't seem unreasonable to expect that if a charm uses a tool, it
ensures that tool is installed beforehand.

The alternative is that end users are left trawling through logs for random
error messages in the form of uncaught Python exceptions, trying to work out
the intersection between the tools the charm writers decided to use and the
packages their cloud provider decided to remove for their own undisclosed
reasons.

Essentially, at the moment we put the load on our customers when the
developers already have this knowledge; handling it in the charm would expand
our range of supported platforms from the narrow "known good" providers to
any box with Ubuntu installed.

Revision history for this message
Alex Kavanagh (ajkavanagh) wrote :

The mlocate usage is in the hardening code in charmhelpers (this is in the ceph-osd charm):

hooks/charmhelpers/contrib/hardening/host/checks/suid_sgid.py
57: '/usr/bin/mlocate',

Realistically, charmhelpers ought to ensure that mlocate is installed before using it; otherwise a charm could enable the hardening feature and then randomly blow up at some later point.

I'll also add a reference to this bug in the charm-helpers GitHub issues:
https://github.com/juju/charm-helpers/issues/179
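
For reference, one possible shape for a charm-helpers-side fix for the FileNotFoundError above is to make add_to_updatedb_prunepath tolerate a missing updatedb.conf. This is a sketch only; the PRUNEPATHS rewrite below is illustrative and not the actual charm-helpers implementation:

import os
import re


def add_to_updatedb_prunepath(path, updatedb_path='/etc/updatedb.conf'):
    # mlocate is not installed (e.g. an image built without the 'standard'
    # task), so there is no updatedb.conf to edit; skip instead of raising
    # FileNotFoundError as in the traceback above.
    if not os.path.exists(updatedb_path):
        return

    with open(updatedb_path, 'r+') as f_id:
        conf = f_id.read()

        def _append(match):
            paths = match.group(2).split()
            if path not in paths:
                paths.append(path)
            return '{}"{}"'.format(match.group(1), ' '.join(paths))

        # Add the path to the PRUNEPATHS line so updatedb skips the Ceph
        # storage mount.
        f_id.seek(0)
        f_id.write(re.sub(r'(PRUNEPATHS=)"([^"]*)"', _append, conf, count=1))
        f_id.truncate()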

Changed in charm-ceph-osd:
status: New → Triaged
importance: Undecided → Low