mgr commands report EACCES error

Bug #1884847 reported by Radhakrishnan
This bug affects 2 people
Affects: Ceph Monitor Charm
Status: New
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

We recently made some network changes, after which all PGs recovered except for one. While troubleshooting to find the exact reason, I ran the following command from the active mon node:
# ceph pg ls incomplete
and received the error below:
Error EACCES: access denied: does your client key have mgr caps? See http://docs.ceph.com/docs/master/mgr/administrator/#client-authentication

Below is the mgr-related portion of the output from # ceph auth ls:
mgr.usashaz1acsmon01
        key: AQCQHFhenMBFFBAAe6pYxb8lQ3U2qNzSfg560w==
        caps: [mds] allow *
        caps: [mon] allow profile mgr
        caps: [osd] allow *
mgr.usashaz1acsmon02
        key: AQCPHFhepDvqERAAZliPBPnxfehgjhWHHJEPyA==
        caps: [mds] allow *
        caps: [mon] allow profile mgr
        caps: [osd] allow *
mgr.usashaz1acsmon03
        key: AQClHFhePxXKOBAAwpEJ8dbovShkQa5NrWc1oQ==
        caps: [mds] allow *
        caps: [mon] allow profile mgr
        caps: [osd] allow *
Even if I try # ceph telemetry show, I receive the same error as above.
OSD-related commands such as ceph osd tree work fine.

I suspect the mon caps should also be allow * instead of allow profile mgr.
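Note that the error message refers to the key the ceph CLI authenticates with (by default client.admin), not to the mgr.* daemon keys shown above. A quick way to check, assuming the CLI is using client.admin, is:

# ceph auth get client.admin

and see whether the output includes a caps: [mgr] line.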

Here are the details about the environment,

Ceph version: 14.2.4
OpenStack: Queens
Ceph stack: 3 mon nodes / 4 RADOS Gateway nodes / 18 OSD nodes
Juju version: 2.8.0-bionic-amd64

Regards
Radhakrishnan Sethuraman

Revision history for this message
Chris MacNaughton (chris.macnaughton) wrote :

It looks like this may not be a ceph-mon charm bug but rather a configuration issue with Ceph from the Cloud Archive. Could you clarify how you've deployed Ceph?

Changed in charm-ceph-mon:
status: New → Incomplete
Revision history for this message
Radhakrishnan (rkay111982) wrote :

Ceph was deployed with Juju from a bundle YAML file. Below is the Ceph-related content of that file.

  # CEPH configuration

  osd-devices: &osd-devices >-
    /dev/disk/by-dname/bcache0
    /dev/disk/by-dname/bcache1
    /dev/disk/by-dname/bcache2
    /dev/disk/by-dname/bcache3
    /dev/disk/by-dname/bcache4
    /dev/disk/by-dname/bcache5
    /dev/disk/by-dname/bcache6
    /dev/disk/by-dname/bcache7
    /dev/disk/by-dname/bcache8
    /dev/disk/by-dname/bcache9
    /dev/disk/by-dname/bcache10
    /dev/disk/by-dname/bcache11

  osd-config-flags: &osd-config-flags >
                                                {
                                                    osd: {
                                                        # enable discard as bluestore has to manage it
                                                        # instead of os doing it on a file system
                                                        # see https://github.com/ceph/ceph/pull/14727
                                                        bdev_enable_discard: true,
                                                        bdev_async_discard: true
                                                    }
                                                }
  customize-failure-domain: &customize-failure-domain False

  # Expected OSD count is total number of OSD disks that will be part of Ceph cluster.
  # Never set this number higher or much lower than the real number. 10-20% less than
  # actual number is acceptable
  #expected-osd-count: &expected-osd-count 450
  expected-osd-count: &expected-osd-count 192
  expected-mon-count: &expected-mon-count 3

  # CEPH access network
  ceph-public-space: &ceph-public-space ceph-access-space

  # CEPH replication network
  ceph-cluster-space: &ceph-cluster-space ceph-cluster-space
  overlay-space: &overlay-space overlay-space

  # Workaround for 'only one default binding supported'
  oam-space-constr: &oam-space-constr spaces=oam-space
  ceph-access-constr: &ceph-access-constr spaces=ceph-access-space
  combi-access-constr: &combi-access-constr spaces=ceph-access-space,oam-space
  cinder:
    charm: cs:cinder-300
    num_units: 3
    constraints: *combi-access-constr
    bindings:
      "": *oam-space
      public: *public-space
      admin: *admin-space
      internal: *internal-space
      shared-db: *internal-space
      ceph: *ceph-public-space
    options:
      worker-multiplier: *worker-multiplier
      openstack-origin: *openstack-origin
      block-device: None
      glance-api-version: 2
      vip: *cinder-vip
      use-internal-endpoints: True
      region: *openstack-region
      config-flags: 'default_volume_type=cinder-ceph-ssd'
    to:
    - lxd:101
    - lxd:102
    - lxd:103
  cinder-ceph:
    charm: cs:cinder-ceph-253
    options:
      restrict-ceph-pools: False
  cinder-ceph-ssd:
    charm: cs:cinder-ceph-253
    options:
      restrict-ceph-pools: False
  cinder-ceph-nvme:
    charm: cs:cinder-ceph-253
    options:
      restrict-ceph-pools: False
  ceph-mon:
    charm: cs:ceph-mon-45
    num_units: 3
    bindings:
      "": *oam-space
      public: *ceph-public-...


Revision history for this message
Andrey Grebennikov (agrebennikov) wrote :

Chris, this is a boosted version of a Queens deployment.
In order to try CephFS we upgraded Ceph to Nautilus (Ceph resides on separate nodes); it was originally deployed with Luminous.
It was deployed with the ceph-mon-45 and ceph-osd-298 charms.
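As a side note, on upgraded clusters the key the CLI uses can end up without mgr caps (the mgr capability only exists since the ceph-mgr daemon was introduced). The client-authentication page linked in the error message suggests updating the key along these lines, adjusting the key name if a different client is used:

# ceph auth caps client.admin mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'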

Changed in charm-ceph-mon:
status: Incomplete → New
Revision history for this message
Chris MacNaughton (chris.macnaughton) wrote :

Can you include the relations to ceph-mon? The only thing I know of that touches the admin keyring's permissions is the ceph-admin relation.
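For reference, one way to capture that information on a Juju 2.8 deployment (assuming the standard client) is:

$ juju status --relations ceph-mon

which lists the applications related to ceph-mon and the interface used by each relation.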
