Create share action should not ask for a vip
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Ceph NFS Charm | Triaged | High | Unassigned |
Bug Description
I'm running a small Ceph cluster on AWS Jammy machines with one ceph-nfs unit on quincy/stable rev 5. Juju version is 3.2.0.
Running the create-share action results in:
$ juju run ceph-nfs/leader create-share name=test size=10 allowed-
Running operation 37 with 1 task
- task 38 on unit-ceph-nfs-0
Waiting for task 38...
ERROR the following task failed:
- id "38" with return code 1
use 'juju show-task' to inspect the failure
$ juju show-operation 37
summary: create-share run on ceph-nfs/leader
status: failed
action:
name: create-share
parameters:
allowed-ips: 10.0.0.0/24
name: test
size: 10
timing:
enqueued: 2023-06-01 16:13:48 +0400 +04
started: 2023-06-01 16:13:49 +0400 +04
completed: 2023-06-01 16:13:49 +0400 +04
tasks:
"38":
host: ceph-nfs/0
status: failed
timing:
enqueued: 2023-06-01 16:13:48 +0400 +04
started: 2023-06-01 16:13:49 +0400 +04
completed: 2023-06-01 16:13:49 +0400 +04
message: exit status 1
results:
return-code: 1
Inspecting /var/log/ shows:
2023-06-01 12:13:48 INFO unit.ceph-
2023-06-01 12:13:48 INFO unit.ceph-
2023-06-01 12:13:48 ERROR unit.ceph-
Traceback (most recent call last):
  File "/var/lib/
    main(
  File "/var/lib/
    _emit_
  File "/var/lib/
    event_
  File "/var/lib/
    framework.
  File "/var/lib/
    self.
  File "/var/lib/
    custom_
  File "/var/lib/
    "ip": self.access_
  File "/var/lib/
    return self._get_
  File "/var/lib/
    bindings[
  File "/var/lib/
    for vip in self.vips
  File "/var/lib/
    return self.config.
AttributeError: 'NoneType' object has no attribute 'split'
My vip config option is indeed empty, since it is not a mandatory field.
Expected outcome: the create-share action completes even without a vip.
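For context, the last frames of the traceback (`for vip in self.vips`, then a call ending in `.split()` on the config value) suggest the `vip` option is read without a None check. A minimal sketch of the kind of guard that would avoid the AttributeError (class and property names here are hypothetical, modeled on the traceback, not the charm's actual code):

```python
from typing import List, Optional


class CharmSketch:
    """Hypothetical stand-in for the charm class in the traceback.

    The AttributeError indicates code roughly equivalent to
    `self.config.get("vip").split()`, which fails when the optional
    `vip` option is unset (None). Treating an unset vip as an empty
    list would let create-share proceed without one.
    """

    def __init__(self, config: dict):
        self.config = config

    @property
    def vips(self) -> List[str]:
        vip: Optional[str] = self.config.get("vip")
        # Guard: an unset (None) or empty vip yields no VIPs at all.
        return vip.split() if vip else []


print(CharmSketch({"vip": None}).vips)         # -> []
print(CharmSketch({"vip": "10.5.0.10"}).vips)  # -> ['10.5.0.10']
```

With such a guard, callers iterating `self.vips` simply see an empty list when no vip is configured, instead of crashing.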
$ juju status --relations
Model Controller Cloud/Region Version SLA Timestamp
ceph aws-controller aws/us-east-1 3.2.0 unsupported 16:23:17+04:00
App Version Status Scale Charm Channel Rev Exposed Message
ceph-dashboard active 3 ceph-dashboard quincy/stable 25 no Unit is ready
ceph-fs 17.2.5 active 1 ceph-fs quincy/stable 57 no Unit is ready
ceph-mon 17.2.5 active 3 ceph-mon quincy/stable 162 no Unit is ready and clustered
ceph-nfs active 1 ceph-nfs quincy/stable 5 no Unit is ready
ceph-osd 17.2.5 active 3 ceph-osd quincy/stable 559 no Unit is ready (1 OSD)
ceph-radosgw 17.2.5 active 1 ceph-radosgw quincy/stable 548 no Unit is ready
Unit Workload Agent Machine Public address Ports Message
ceph-fs/0* active idle 0 54.197.66.72 Unit is ready
ceph-mon/0 active idle 2 54.82.174.61 Unit is ready and clustered
ceph-dashboard/0* active idle 54.82.174.61 Unit is ready
ceph-mon/1* active idle 3 3.90.227.28 Unit is ready and clustered
ceph-dashboard/2 active idle 3.90.227.28 Unit is ready
ceph-mon/2 active idle 4 54.221.164.123 Unit is ready and clustered
ceph-dashboard/1 active idle 54.221.164.123 Unit is ready
ceph-nfs/0* active idle 1 34.200.235.77 Unit is ready
ceph-osd/0 active idle 5 44.214.44.34 Unit is ready (1 OSD)
ceph-osd/1 active idle 6 54.226.124.7 Unit is ready (1 OSD)
ceph-osd/2* active idle 7 100.25.191.181 Unit is ready (1 OSD)
ceph-radosgw/0* active idle 8 3.227.251.105 443/tcp Unit is ready
Machine State Address Inst id Base AZ Message
0 started 54.197.66.72 i-044042572533d684a ubuntu@22.04 us-east-1a running
1 started 34.200.235.77 i-05a1559fb30366223 ubuntu@22.04 us-east-1d running
2 started 54.82.174.61 i-0e3f1c0b1930542d7 ubuntu@22.04 us-east-1e running
3 started 3.90.227.28 i-0aa45eef384ae6746 ubuntu@22.04 us-east-1c running
4 started 54.221.164.123 i-00d5cf8cb721df8d4 ubuntu@22.04 us-east-1b running
5 started 44.214.44.34 i-02b1695175521213c ubuntu@22.04 us-east-1d running
6 started 54.226.124.7 i-09c9f514272d16d27 ubuntu@22.04 us-east-1c running
7 started 100.25.191.181 i-0558266c55636213e ubuntu@22.04 us-east-1e running
8 started 3.227.251.105 i-00eecf4ec3b7d33fc ubuntu@22.04 us-east-1f running
Relation provider Requirer Interface Type Message
ceph-mon:client ceph-nfs:
ceph-mon:dashboard ceph-dashboard:
ceph-mon:mds ceph-fs:ceph-mds ceph-mds regular
ceph-mon:mon ceph-mon:mon ceph peer
ceph-mon:osd ceph-osd:mon ceph-osd regular
ceph-mon:radosgw ceph-radosgw:mon ceph-radosgw regular
ceph-nfs:cluster ceph-nfs:cluster ceph-nfs-peer peer
ceph-radosgw:
ceph-radosgw:
$ juju export-bundle
series: jammy
applications:
ceph-dashboard:
charm: ceph-dashboard
channel: quincy/stable
revision: 25
options:
ssl_cert: <REDACTED>
ssl_key: <REDACTED>
ceph-fs:
charm: ceph-fs
channel: quincy/stable
revision: 57
num_units: 1
to:
- "0"
constraints: arch=amd64
ceph-mon:
charm: ceph-mon
channel: quincy/stable
revision: 162
resources:
alert-rules: 1
num_units: 3
to:
- "2"
- "3"
- "4"
options:
customize
expected-
source: distro
constraints: arch=amd64
ceph-nfs:
charm: ceph-nfs
channel: quincy/stable
revision: 5
num_units: 1
to:
- "1"
constraints: arch=amd64
ceph-osd:
charm: ceph-osd
channel: quincy/stable
revision: 559
num_units: 3
to:
- "5"
- "6"
- "7"
options:
aa-
autotune: false
bluestore: true
bluestore
customize
osd-devices: /dev/disk/
osd-encrypt: false
source: distro
constraints: arch=amd64
storage:
bluestore-db: loop,1024M
bluestore
cache-
osd-devices: ebs,1,32768M
osd-journals: loop,1024M
ceph-radosgw:
charm: ceph-radosgw
channel: quincy/stable
revision: 548
num_units: 1
to:
- "8"
options:
source: distro
ssl_cert: <REDACTED>
ssl_key: <REDACTED>
constraints: arch=amd64
machines:
"0":
constraints: root-disk=32768 instance-
"1":
constraints: root-disk=32768 instance-
"2":
constraints: root-disk=32768 instance-
"3":
constraints: root-disk=32768 instance-
"4":
constraints: root-disk=32768 instance-
"5":
constraints: root-disk=32768 instance-
"6":
constraints: root-disk=32768 instance-
"7":
constraints: root-disk=32768 instance-
"8":
constraints: root-disk=32768 instance-
relations:
- - ceph-dashboard:
- ceph-mon:dashboard
- - ceph-dashboard:
- ceph-radosgw:
- - ceph-mon:osd
- ceph-osd:mon
- - ceph-mon:radosgw
- ceph-radosgw:mon
- - ceph-fs:ceph-mds
- ceph-mon:mds
- - ceph-nfs:
- ceph-mon:client
Changed in charm-ceph-nfs:
status: New → Triaged
importance: Undecided → High
Hi, I would like to work on this bug as my onboarding task. How should I proceed, and who should I discuss it with? Thanks!