manila.exception.ShareBackendException: json_command failed - [errno -110] error calling ceph_mount

Bug #2012317 reported by Dominik Bender
Affects: OpenStack Manila-Ganesha Charm
Status: Invalid
Importance: Undecided
Assigned to: Unassigned

Bug Description

After a fresh deployment of cephfs, manila and manila-ganesha on jammy/zed stable, all charms come up active without errors.

Creating an NFS share fails with an error. Please see the manila.log attachment (from manila-ganesha-sc2/leader).

Steps:

manila create --share-type cephfsnfstype --name testshareWow nfs 1
...
manila show 8ef9d21e-3fbd-483f-a0f3-06092b001d1e
+---------------------------------------+--------------------------------------+
| Property | Value |
+---------------------------------------+--------------------------------------+
| id | 8ef9d21e-3fbd-483f-a0f3-06092b001d1e |
| size | 1 |
| availability_zone | nova |
| created_at | 2023-03-21T00:28:40.237006 |
| status | error |
| name | testshareWow |
| description | None |
| project_id | ec101c020cd94de3bdc7ef514021e0e2 |
| snapshot_id | None |
| share_network_id | None |
| share_proto | NFS |
| metadata | {} |
| share_type | 0f2f23df-ae8d-497f-813a-228b77b1151a |
| is_public | False |
| snapshot_support | False |
| task_state | None |
| share_type_name | cephfsnfstype |
| access_rules_status | active |
| replication_type | None |
| has_replicas | False |
| user_id | a6c7ceede9af466089380c141428d439 |
| create_share_from_snapshot_support | False |
| revert_to_snapshot_support | False |
| share_group_id | None |
| source_share_group_snapshot_member_id | None |
| mount_snapshot_support | False |
| progress | None |
| is_soft_deleted | False |
| scheduled_to_be_deleted_at | None |
| share_server_id | None |
| host | 10.110.100.202@cephfsnfs1#cephfs |
| export_locations | [] |
+---------------------------------------+--------------------------------------+
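
When a share ends up in error status, the underlying scheduler/driver error can often also be retrieved through Manila's asynchronous user messages instead of the service logs. A minimal sketch (the message ID is a placeholder):

manila message-list
manila message-show <message-id>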

----

Controller Version: 2.9.31
Series: jammy
Openstack-Origin/Source: cloud:jammy-zed

manila 15.1.0 active 3 manila zed/stable 92 no Unit is ready
manila-hacluster active 3 hacluster 2.4/stable 117 no Unit is ready and clustered
manila-mysql-router 8.0.32 active 3 mysql-router 8.0/stable 35 no Unit is ready

manila-ganesha-sc2 17.2.0 active 3 manila-ganesha zed/stable 72 no Unit is ready
manila-ganesha-sc2-hacluster active 3 hacluster 2.4/stable 117 no Unit is ready and clustered
manila-ganesha-sc2-mysql-router 8.0.32 active 3 mysql-router 8.0/stable 35 no Unit is ready

...
  manila:
    charm: manila
    channel: zed/stable
    revision: 92
    series: jammy
    num_units: 3
    to:
    - lxd:0
    - lxd:1
    - lxd:2
    options:
      default-share-backend: cephfsnfs1
      default-share-type: cephfsnfstype
      openstack-origin: cloud:jammy-zed
      region: de1
      share-protocols: NFS CIFS CEPHFS
      use-internal-endpoints: true
      vip: 10.104.100.200 10.105.100.200 10.110.100.200
      vip_cidr: 16
    constraints: arch=amd64
    bindings:
      "": internal-c1
      admin: admin-c1
      amqp: internal-c1
      certificates: internal-c1
      cluster: internal-c1
      ha: internal-c1
      identity-service: internal-c1
      internal: internal-c1
      manila-plugin: internal-c1
      neutron-plugin: internal-c1
      nrpe-external-master: internal-c1
      public: service-c1
      remote-manila-plugin: internal-c1
      shared-db: internal-c1
  manila-ganesha-sc2:
    charm: manila-ganesha
    channel: zed/stable
    revision: 72
    series: jammy
    num_units: 3
    to:
    - lxd:42
    - lxd:43
    - lxd:44
    options:
      openstack-origin: cloud:jammy-zed
      use-internal-endpoints: true
      vip: 10.104.100.202 10.105.100.202 10.106.100.202 10.110.100.202
      vip_cidr: 16
    constraints: arch=amd64
    bindings:
      "": internal-c1
      admin: admin-c1
      amqp: internal-c1
      ceph: storage-c1
      certificates: internal-c1
      cluster: internal-c1
      ha: internal-c1
      identity-service: internal-c1
      internal: internal-c1
      manila-plugin: internal-c1
      nrpe-external-master: internal-c1
      public: service-c1
      shared-db: internal-c1
      tenant-storage: service-c1
...
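
Since the deployment relies heavily on the space bindings above, the spaces and the bindings the deployed applications actually ended up with can be cross-checked from the model. A sketch (application name as used above; on recent Juju releases show-application also prints the endpoint bindings, and export-bundle reproduces the bindings sections):

juju spaces
juju show-application manila-ganesha-sc2
juju export-bundle > current-bundle.yaml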

Revision history for this message
Dominik Bender (ephermeral) wrote (last edit ):

I retried it with another Ganesha backend (new Ceph cluster) with these post-deployment steps:

manila type-create nfs-sc1 false
manila type-key nfs-sc1 set share_backend_name=CEPHFSNFS1 storage_protocol=NFS
manila create --share-type nfs-sc1 --name sc1-test nfs 1

All charms are active and ready. Same error in manila-share.log on manila-ganesha-sc1/leader:

"2023-03-22 19:30:59.439 1394649 ERROR oslo_messaging.rpc.server [None req-61dc9d69-9b9c-458e-ae7c-6bd53fef840f a6c7ceede9af466089380c141428d439 ec101c020cd94de3bdc7ef514021e0e2 - - - -] Exception during message handling: manila.exception.ShareBackendException: json_command failed - prefix=fs subvolume create, argdict={'vol_name': 'ceph-fs-sc1', 'sub_name': '4b07464c-e4b4-4d98-aa02-172a42871972', 'size': 1073741824, 'namespace_isolated': True, 'mode': '755', 'format': 'json'} - exception message: [errno -110] error calling ceph_mount."

Revision history for this message
Dominik Bender (ephermeral) wrote (last edit ):

In debug mode I see the following entries before the error occurs (10 minutes earlier):

2023-03-23 18:10:17.477 9608 DEBUG manila.share.drivers.cephfs.driver [None req-7f7da6ee-76a1-46bf-bd5d-fbf6946615c9 a6c7ceede9af466089380c141428d439 ec101c020cd94de3bdc7ef514021e0e2 - - - -] [CEPHFSNFS1]: create_share: id=6760109e-065c-4583-aa22-87c764af7a23, size=1, group=None. create_share /usr/lib/python3/dist-packages/manila/share/drivers/cephfs/driver.py:457

2023-03-23 18:10:17.478 9608 DEBUG manila.share.drivers.cephfs.driver [None req-7f7da6ee-76a1-46bf-bd5d-fbf6946615c9 a6c7ceede9af466089380c141428d439 ec101c020cd94de3bdc7ef514021e0e2 - - - -] Invoking ceph_argparse.json_command - rados_client=<rados.Rados object at 0x7f6af5c2f940>, target=('mon-mgr',), prefix='fs subvolume create', argdict={'vol_name': 'ceph-fs-sc1', 'sub_name': '6760109e-065c-4583-aa22-87c764af7a23', 'size': 1073741824, 'namespace_isolated': True, 'mode': '755', 'format': 'json'}, inbuf=b'', timeout=10. rados_command /usr/lib/python3/dist-packages/manila/share/drivers/cephfs/driver.py:18

Is the target=('mon-mgr',) correct?
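
The fs subvolume commands are implemented by the mgr volumes module, which is presumably why the driver sends them to the mon-mgr target. To separate a driver problem from a plain connectivity problem, the same command can be tried by hand from the manila-ganesha unit. A sketch (client and volume names taken from the log above; the subvolume name is a throwaway placeholder):

sudo ceph --id manila-ganesha-sc1 --connect-timeout 10 \
    fs subvolume create ceph-fs-sc1 bug-test --size 1073741824 --namespace-isolated --mode 755
sudo ceph --id manila-ganesha-sc1 fs subvolume rm ceph-fs-sc1 bug-test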

Revision history for this message
Dominik Bender (ephermeral) wrote :

/var/log/ceph/ceph-client.manila-ganesha-sc1.log is empty.

The client on the mon is correct. The keyrings match. The manila-ganesha LXD containers can reach the mon IPs.

ceph auth list
---
client.manila-ganesha-sc1
        key: ***
        caps: [mds] allow *
        caps: [mgr] allow *
        caps: [mon] allow r, allow command "auth del", allow command "auth caps", allow command "auth get", allow command "auth get-or-create"
        caps: [osd] allow rw
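
Whether the containers can actually talk to the mons can also be checked at the port level. A sketch (<mon-ip> is a placeholder for one of the monitor addresses; 3300 is the msgr2 port, 6789 the legacy msgr1 port):

nc -vz <mon-ip> 3300
nc -vz <mon-ip> 6789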

Revision history for this message
Dominik Bender (ephermeral) wrote :

The problem was a wrong binding:

ceph-fs-sc2:
    bindings:
      ...
      public: internal-c1

ceph-mon-sc2:
    bindings:
      ...
      public: storage-c1
      ...

The ceph-fs charm's public endpoint should be bound to the same space as ceph-mon's public endpoint: public: storage-c1
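
A sketch of the corresponding fix on a deployed model (application and space names as used above; alternatively redeploy with the corrected binding in the bundle):

juju bind ceph-fs-sc2 public=storage-c1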

Changed in charm-manila-ganesha:
status: New → Invalid