ceph-osd failed : unable to read/decode monmap from /srv/storage/bcache-sdb/activate.monmap: (13) Permission denied with apparmor

Bug #1744443 reported by Nobuto Murata
Affects: Ceph OSD Charm
Status: Triaged
Importance: Low
Assigned to: Unassigned
Milestone: (none)

Bug Description

/srv/storage/bcache-sdb is a pre-defined mount point set up by MAAS on xfs + bcache.

With apparmor enabled, it looks like we have to mount it under /srv/ceph, but there is no mention of this path restriction in the charm's config options. The description of osd-devices could be improved.

[osd-devices option]

      For ceph >= 0.56.6 these can also be directories instead of devices - the
      charm assumes anything not starting with /dev is a directory instead.
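
For context, this failure corresponds to pointing the charm at the bcache mount point directly. A minimal reproduction, assuming Juju 2.x syntax and an application named ceph-osd, would be:

  juju config ceph-osd osd-devices='/srv/storage/bcache-sdb'

Any directory outside the paths whitelisted in the apparmor profile below fails the same way.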

[/etc/apparmor.d/usr.bin.ceph-osd]
  /run/ceph/* rw,
  /srv/ceph/** rwk,
  /tmp/ r,
  /var/lib/ceph/** rwk,
  /var/lib/ceph/osd/** l,
  /var/lib/charm/*/ceph.conf r,
  /var/log/ceph/* rwk,
  /var/run/ceph/* rwk,
  /var/tmp/ r,
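
Until the profile is widened, one possible workaround on an affected unit is a local override, assuming the packaged profile sources the usual local include (Ubuntu profiles generally do); the override file below is illustrative, not shipped by the charm:

  # /etc/apparmor.d/local/usr.bin.ceph-osd
  /srv/storage/** rwk,

  $ sudo apparmor_parser -r /etc/apparmor.d/usr.bin.ceph-osd

apparmor_parser -r reloads the profile in place, so the denial clears without a reboot.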

Jan 20 09:58:52 ucs-5a-block-3 kernel: [48696.998084] audit: type=1400 audit(1516442332.795:166): apparmor="DENIED" operation="open" profile="/usr/bin/ceph-osd" name="/srv/storage/bcache-sdb/activate.monmap" pid=514850 comm="ceph-osd" requested_mask="r" denied_mask="r" fsuid=64045 ouid=0
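
The denial can be confirmed on the unit itself; aa-status comes from the apparmor-utils package:

  $ sudo aa-status | grep ceph-osd        # profile should be listed in enforce mode
  $ dmesg | grep 'apparmor="DENIED"'      # kernel audit records like the one above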

unit-ceph-osd-1: 09:57:12 DEBUG unit.ceph-osd/1.mon-relation-changed ceph-disk: Error: ['ceph-osd', '--cluster', 'ceph', '--mkfs', '--mkkey', '-i', u'1', '--monmap', '/srv/storage/bcache-sdb/activate.monmap', '--osd-data', '/srv/storage/bcache-sdb', '--osd-journal', '/srv/storage/bcache-sdb/journal', '--osd-uuid', u'88e61a5c-a318-4644-b394-3da09ed41ea2', '--keyring', '/srv/storage/bcache-sdb/keyring', '--setuser', 'ceph', '--setgroup', 'ceph'] failed : unable to read/decode monmap from /srv/storage/bcache-sdb/activate.monmap: (13) Permission denied
unit-ceph-osd-1: 09:57:12 DEBUG unit.ceph-osd/1.mon-relation-changed
unit-ceph-osd-1: 09:57:12 DEBUG unit.ceph-osd/1.mon-relation-changed Traceback (most recent call last):
unit-ceph-osd-1: 09:57:12 DEBUG unit.ceph-osd/1.mon-relation-changed File "/var/lib/juju/agents/unit-ceph-osd-1/charm/hooks/mon-relation-changed", line 557, in <module>
unit-ceph-osd-1: 09:57:12 DEBUG unit.ceph-osd/1.mon-relation-changed hooks.execute(sys.argv)
unit-ceph-osd-1: 09:57:12 DEBUG unit.ceph-osd/1.mon-relation-changed File "/var/lib/juju/agents/unit-ceph-osd-1/charm/hooks/charmhelpers/core/hookenv.py", line 798, in execute
unit-ceph-osd-1: 09:57:12 DEBUG unit.ceph-osd/1.mon-relation-changed self._hooks[hook_name]()
unit-ceph-osd-1: 09:57:12 DEBUG unit.ceph-osd/1.mon-relation-changed File "/var/lib/juju/agents/unit-ceph-osd-1/charm/hooks/mon-relation-changed", line 484, in mon_relation
unit-ceph-osd-1: 09:57:12 DEBUG unit.ceph-osd/1.mon-relation-changed prepare_disks_and_activate()
unit-ceph-osd-1: 09:57:12 DEBUG unit.ceph-osd/1.mon-relation-changed File "/var/lib/juju/agents/unit-ceph-osd-1/charm/hooks/mon-relation-changed", line 391, in prepare_disks_and_activate
unit-ceph-osd-1: 09:57:12 DEBUG unit.ceph-osd/1.mon-relation-changed ceph.start_osds(get_devices())
unit-ceph-osd-1: 09:57:12 DEBUG unit.ceph-osd/1.mon-relation-changed File "lib/ceph/utils.py", line 1000, in start_osds
unit-ceph-osd-1: 09:57:12 DEBUG unit.ceph-osd/1.mon-relation-changed subprocess.check_call(['ceph-disk', 'activate', dev_or_path])
unit-ceph-osd-1: 09:57:12 DEBUG unit.ceph-osd/1.mon-relation-changed File "/usr/lib/python3.5/subprocess.py", line 581, in check_call
unit-ceph-osd-1: 09:57:12 DEBUG unit.ceph-osd/1.mon-relation-changed raise CalledProcessError(retcode, cmd)
unit-ceph-osd-1: 09:57:12 DEBUG unit.ceph-osd/1.mon-relation-changed subprocess.CalledProcessError: Command '['ceph-disk', 'activate', '/srv/storage/bcache-sdb']' returned non-zero exit status 1
unit-ceph-osd-1: 09:57:13 ERROR juju.worker.uniter.operation hook "mon-relation-changed" failed: exit status 1
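
Since the failing call is plain ceph-disk, it can be re-run by hand on the unit to iterate on the profile without waiting for the next hook retry:

  $ sudo ceph-disk activate /srv/storage/bcache-sdb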

Tags: cpe-onsite
Nobuto Murata (nobuto) wrote:

I didn't notice the restriction, since I happened to use /srv/ceph anyway. But this time I wanted to use a more generic name (/srv/storage), since some of the bcache mount points are used for Swift.

James Page (james-page) wrote:

Most configuration options assume you're not running with apparmor enforcing (as most of them pre-date the introduction of this feature).

We could look to make this 100% dynamic - however that would mean writing some sort of apparmor helper for ceph, which I think is outside the scope of what we should be doing in the charm.

If /srv/storage is somewhere we want to support, then let's add that to the apparmor profile; it's probably worth noting this specific use case in the README so that followers don't trip over using /srv/<insertrandomstringhere>.
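
For reference, supporting /srv/storage charm-side would amount to a one-line addition to the shipped profile, mirroring the existing /srv/ceph entry (a sketch, not a merged change):

  /srv/storage/** rwk,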

Changed in charm-ceph-osd:
status: New → Triaged
importance: Undecided → Low
Nobuto Murata (nobuto) wrote:

A documentation update is more than sufficient instead of a code change. I will propose it when I have spare cycles.
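
A sketch of the wording such a note against osd-devices might take (illustrative only; aa-profile-mode is assumed to be the charm's apparmor option):

      Note: when apparmor is enforced (aa-profile-mode=enforce), directory
      based OSDs must live under a path permitted by the shipped profile,
      such as /srv/ceph; other locations (e.g. /srv/storage) are denied.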
