Create share action should not ask for a vip

Bug #2022064 reported by Natalia Litvinova
Affects: Ceph NFS Charm
Status: Triaged
Importance: High
Assigned to: Unassigned
Milestone: (none)

Bug Description

I'm running a small Ceph cluster on AWS Jammy machines with one ceph-nfs unit from quincy/stable rev 5. The Juju version is 3.2.0.

Running the create-share action fails with:

$ juju run ceph-nfs/leader create-share name=test size=10 allowed-ips=10.0.0.0/24
Running operation 37 with 1 task
  - task 38 on unit-ceph-nfs-0

Waiting for task 38...
ERROR the following task failed:
 - id "38" with return code 1

use 'juju show-task' to inspect the failure

$ juju show-operation 37
summary: create-share run on ceph-nfs/leader
status: failed
action:
  name: create-share
  parameters:
    allowed-ips: 10.0.0.0/24
    name: test
    size: 10
timing:
  enqueued: 2023-06-01 16:13:48 +0400 +04
  started: 2023-06-01 16:13:49 +0400 +04
  completed: 2023-06-01 16:13:49 +0400 +04
tasks:
  "38":
    host: ceph-nfs/0
    status: failed
    timing:
      enqueued: 2023-06-01 16:13:48 +0400 +04
      started: 2023-06-01 16:13:49 +0400 +04
      completed: 2023-06-01 16:13:49 +0400 +04
    message: exit status 1
    results:
      return-code: 1

Inspecting /var/log/juju/unit-ceph-nfs-0.log shows the following:

2023-06-01 12:13:48 INFO unit.ceph-nfs/0.juju-log server.go:325 Using octopus class
2023-06-01 12:13:48 INFO unit.ceph-nfs/0.juju-log server.go:325 Reloading Ganesha after nonce triggered reload
2023-06-01 12:13:48 ERROR unit.ceph-nfs/0.juju-log server.go:325 Uncaught exception while in charm code:
Traceback (most recent call last):
  File "/var/lib/juju/agents/unit-ceph-nfs-0/charm/./src/charm.py", line 511, in <module>
    main(ops_openstack.core.get_charm_class_for_release())
  File "/var/lib/juju/agents/unit-ceph-nfs-0/charm/venv/ops/main.py", line 438, in main
    _emit_charm_event(charm, dispatcher.event_name)
  File "/var/lib/juju/agents/unit-ceph-nfs-0/charm/venv/ops/main.py", line 150, in _emit_charm_event
    event_to_emit.emit(*args, **kwargs)
  File "/var/lib/juju/agents/unit-ceph-nfs-0/charm/venv/ops/framework.py", line 355, in emit
    framework._emit(event) # noqa
  File "/var/lib/juju/agents/unit-ceph-nfs-0/charm/venv/ops/framework.py", line 856, in _emit
    self._reemit(event_path)
  File "/var/lib/juju/agents/unit-ceph-nfs-0/charm/venv/ops/framework.py", line 931, in _reemit
    custom_handler(event)
  File "/var/lib/juju/agents/unit-ceph-nfs-0/charm/./src/charm.py", line 434, in create_share_action
    "ip": self.access_address()})
  File "/var/lib/juju/agents/unit-ceph-nfs-0/charm/./src/charm.py", line 412, in access_address
    return self._get_space_vip_mapping().get(
  File "/var/lib/juju/agents/unit-ceph-nfs-0/charm/./src/charm.py", line 400, in _get_space_vip_mapping
    bindings[binding_name] = [
  File "/var/lib/juju/agents/unit-ceph-nfs-0/charm/./src/charm.py", line 403, in <listcomp>
    for vip in self.vips
  File "/var/lib/juju/agents/unit-ceph-nfs-0/charm/./src/charm.py", line 395, in vips
    return self.config.get('vip').split()
AttributeError: 'NoneType' object has no attribute 'split'

My vip config option is indeed unset, since it is not a mandatory field.

Expected outcome: the create-share action completes even without a vip configured.
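For reference, the traceback shows the failure comes from the charm's vips property calling .split() on self.config.get('vip') even when the option is unset. Below is a minimal sketch of a guarded version, based only on the property shape visible in the traceback; it is illustrative, not the charm's actual fix:

    @property
    def vips(self):
        """Return the configured VIPs as a list, or an empty list if 'vip' is unset."""
        vip = self.config.get('vip')
        return vip.split() if vip else []

With an empty list here, the list comprehension in _get_space_vip_mapping() would just produce empty VIP lists rather than raising; the charm would presumably still need access_address() to fall back to a suitable unit address when no VIP is mapped for the binding.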

$ juju status --relations
Model Controller Cloud/Region Version SLA Timestamp
ceph aws-controller aws/us-east-1 3.2.0 unsupported 16:23:17+04:00

App Version Status Scale Charm Channel Rev Exposed Message
ceph-dashboard active 3 ceph-dashboard quincy/stable 25 no Unit is ready
ceph-fs 17.2.5 active 1 ceph-fs quincy/stable 57 no Unit is ready
ceph-mon 17.2.5 active 3 ceph-mon quincy/stable 162 no Unit is ready and clustered
ceph-nfs active 1 ceph-nfs quincy/stable 5 no Unit is ready
ceph-osd 17.2.5 active 3 ceph-osd quincy/stable 559 no Unit is ready (1 OSD)
ceph-radosgw 17.2.5 active 1 ceph-radosgw quincy/stable 548 no Unit is ready

Unit Workload Agent Machine Public address Ports Message
ceph-fs/0* active idle 0 54.197.66.72 Unit is ready
ceph-mon/0 active idle 2 54.82.174.61 Unit is ready and clustered
  ceph-dashboard/0* active idle 54.82.174.61 Unit is ready
ceph-mon/1* active idle 3 3.90.227.28 Unit is ready and clustered
  ceph-dashboard/2 active idle 3.90.227.28 Unit is ready
ceph-mon/2 active idle 4 54.221.164.123 Unit is ready and clustered
  ceph-dashboard/1 active idle 54.221.164.123 Unit is ready
ceph-nfs/0* active idle 1 34.200.235.77 Unit is ready
ceph-osd/0 active idle 5 44.214.44.34 Unit is ready (1 OSD)
ceph-osd/1 active idle 6 54.226.124.7 Unit is ready (1 OSD)
ceph-osd/2* active idle 7 100.25.191.181 Unit is ready (1 OSD)
ceph-radosgw/0* active idle 8 3.227.251.105 443/tcp Unit is ready

Machine State Address Inst id Base AZ Message
0 started 54.197.66.72 i-044042572533d684a ubuntu@22.04 us-east-1a running
1 started 34.200.235.77 i-05a1559fb30366223 ubuntu@22.04 us-east-1d running
2 started 54.82.174.61 i-0e3f1c0b1930542d7 ubuntu@22.04 us-east-1e running
3 started 3.90.227.28 i-0aa45eef384ae6746 ubuntu@22.04 us-east-1c running
4 started 54.221.164.123 i-00d5cf8cb721df8d4 ubuntu@22.04 us-east-1b running
5 started 44.214.44.34 i-02b1695175521213c ubuntu@22.04 us-east-1d running
6 started 54.226.124.7 i-09c9f514272d16d27 ubuntu@22.04 us-east-1c running
7 started 100.25.191.181 i-0558266c55636213e ubuntu@22.04 us-east-1e running
8 started 3.227.251.105 i-00eecf4ec3b7d33fc ubuntu@22.04 us-east-1f running

Relation provider Requirer Interface Type Message
ceph-mon:client ceph-nfs:ceph-client ceph-client regular
ceph-mon:dashboard ceph-dashboard:dashboard ceph-dashboard subordinate
ceph-mon:mds ceph-fs:ceph-mds ceph-mds regular
ceph-mon:mon ceph-mon:mon ceph peer
ceph-mon:osd ceph-osd:mon ceph-osd regular
ceph-mon:radosgw ceph-radosgw:mon ceph-radosgw regular
ceph-nfs:cluster ceph-nfs:cluster ceph-nfs-peer peer
ceph-radosgw:cluster ceph-radosgw:cluster swift-ha peer
ceph-radosgw:radosgw-user ceph-dashboard:radosgw-dashboard radosgw-user regular

$ juju export-bundle
series: jammy
applications:
  ceph-dashboard:
    charm: ceph-dashboard
    channel: quincy/stable
    revision: 25
    options:
      ssl_cert: <REDACTED>
      ssl_key: <REDACTED>
  ceph-fs:
    charm: ceph-fs
    channel: quincy/stable
    revision: 57
    num_units: 1
    to:
    - "0"
    constraints: arch=amd64
  ceph-mon:
    charm: ceph-mon
    channel: quincy/stable
    revision: 162
    resources:
      alert-rules: 1
    num_units: 3
    to:
    - "2"
    - "3"
    - "4"
    options:
      customize-failure-domain: true
      expected-osd-count: 3
      source: distro
    constraints: arch=amd64
  ceph-nfs:
    charm: ceph-nfs
    channel: quincy/stable
    revision: 5
    num_units: 1
    to:
    - "1"
    constraints: arch=amd64
  ceph-osd:
    charm: ceph-osd
    channel: quincy/stable
    revision: 559
    num_units: 3
    to:
    - "5"
    - "6"
    - "7"
    options:
      aa-profile-mode: complain
      autotune: false
      bluestore: true
      bluestore-compression-mode: none
      customize-failure-domain: true
      osd-devices: /dev/disk/by-dname/sdc
      osd-encrypt: false
      source: distro
    constraints: arch=amd64
    storage:
      bluestore-db: loop,1024M
      bluestore-wal: loop,1024M
      cache-devices: loop,10240M
      osd-devices: ebs,1,32768M
      osd-journals: loop,1024M
  ceph-radosgw:
    charm: ceph-radosgw
    channel: quincy/stable
    revision: 548
    num_units: 1
    to:
    - "8"
    options:
      source: distro
      ssl_cert: <REDACTED>
      ssl_key: <REDACTED>
    constraints: arch=amd64
machines:
  "0":
    constraints: root-disk=32768 instance-type=t2.large
  "1":
    constraints: root-disk=32768 instance-type=t2.large
  "2":
    constraints: root-disk=32768 instance-type=t2.large
  "3":
    constraints: root-disk=32768 instance-type=t2.large
  "4":
    constraints: root-disk=32768 instance-type=t2.large
  "5":
    constraints: root-disk=32768 instance-type=t2.large
  "6":
    constraints: root-disk=32768 instance-type=t2.large
  "7":
    constraints: root-disk=32768 instance-type=t2.large
  "8":
    constraints: root-disk=32768 instance-type=t2.large
relations:
- - ceph-dashboard:dashboard
  - ceph-mon:dashboard
- - ceph-dashboard:radosgw-dashboard
  - ceph-radosgw:radosgw-user
- - ceph-mon:osd
  - ceph-osd:mon
- - ceph-mon:radosgw
  - ceph-radosgw:mon
- - ceph-fs:ceph-mds
  - ceph-mon:mds
- - ceph-nfs:ceph-client
  - ceph-mon:client

Revision history for this message
Abdullah Cosgun (acsgn) wrote :

Hi, I would like to work on this bug as my onboarding task. How should I proceed, and who should I discuss it with? Thanks!

Changed in charm-ceph-nfs:
status: New → Triaged
importance: Undecided → High