Cannot deploy with bluestore enabled.

Bug #1767739 reported by Michael Quiniola
This bug affects 2 people
Affects: ceph-osd (Juju Charms Collection)
Status: New
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

I'm trying to deploy ceph-osd as part of the OpenStack bundle with bluestore enabled.
The ceph-osd deployment fails with the status "hook failed: "mon-relation-changed" for ceph-mon:osd".

The log entry in /var/log/jujud/unit-ceph-osd is:

 GPT data structures destroyed! You may now partition the disk using fdisk or
2018-04-28 23:34:49 DEBUG mon-relation-changed other utilities.
2018-04-28 23:34:50 DEBUG mon-relation-changed Creating new GPT entries.
2018-04-28 23:34:50 DEBUG mon-relation-changed The operation has completed successfully.
2018-04-28 23:34:51 DEBUG mon-relation-changed Setting name!
2018-04-28 23:34:51 DEBUG mon-relation-changed partNum is 0
2018-04-28 23:34:51 DEBUG mon-relation-changed REALLY setting name!
2018-04-28 23:34:51 DEBUG mon-relation-changed The operation has completed successfully.
2018-04-28 23:34:53 DEBUG mon-relation-changed Setting name!
2018-04-28 23:34:53 DEBUG mon-relation-changed partNum is 1
2018-04-28 23:34:53 DEBUG mon-relation-changed REALLY setting name!
2018-04-28 23:34:53 DEBUG mon-relation-changed The operation has completed successfully.
2018-04-28 23:34:56 DEBUG mon-relation-changed The operation has completed successfully.
2018-04-28 23:34:58 DEBUG mon-relation-changed meta-data=/dev/sdc1 isize=2048 agcount=4, agsize=6400 blks
2018-04-28 23:34:58 DEBUG mon-relation-changed = sectsz=512 attr=2, projid32bit=1
2018-04-28 23:34:58 DEBUG mon-relation-changed = crc=1 finobt=1, sparse=0
2018-04-28 23:34:58 DEBUG mon-relation-changed data = bsize=4096 blocks=25600, imaxpct=25
2018-04-28 23:34:58 DEBUG mon-relation-changed = sunit=0 swidth=0 blks
2018-04-28 23:34:58 DEBUG mon-relation-changed naming =version 2 bsize=4096 ascii-ci=0 ftype=1
2018-04-28 23:34:58 DEBUG mon-relation-changed log =internal log bsize=4096 blocks=864, version=2
2018-04-28 23:34:58 DEBUG mon-relation-changed = sectsz=512 sunit=0 blks, lazy-count=1
2018-04-28 23:34:58 DEBUG mon-relation-changed realtime =none extsz=4096 blocks=0, rtextents=0
2018-04-28 23:35:00 DEBUG mon-relation-changed The operation has completed successfully.
2018-04-28 23:35:01 DEBUG mon-relation-changed Traceback (most recent call last):
2018-04-28 23:35:01 DEBUG mon-relation-changed File "/var/lib/juju/agents/unit-ceph-osd-2/charm/hooks/mon-relation-changed", line 562, in <module>
2018-04-28 23:35:01 DEBUG mon-relation-changed hooks.execute(sys.argv)
2018-04-28 23:35:01 DEBUG mon-relation-changed File "/var/lib/juju/agents/unit-ceph-osd-2/charm/hooks/charmhelpers/core/hookenv.py", line 800, in execute
2018-04-28 23:35:01 DEBUG mon-relation-changed self._hooks[hook_name]()
2018-04-28 23:35:01 DEBUG mon-relation-changed File "/var/lib/juju/agents/unit-ceph-osd-2/charm/hooks/mon-relation-changed", line 489, in mon_relation
2018-04-28 23:35:01 DEBUG mon-relation-changed prepare_disks_and_activate()
2018-04-28 23:35:01 DEBUG mon-relation-changed File "/var/lib/juju/agents/unit-ceph-osd-2/charm/hooks/mon-relation-changed", line 392, in prepare_disks_and_activate
2018-04-28 23:35:01 DEBUG mon-relation-changed config('bluestore'))
2018-04-28 23:35:01 DEBUG mon-relation-changed File "lib/ceph/utils.py", line 1447, in osdize
2018-04-28 23:35:01 DEBUG mon-relation-changed bluestore)
2018-04-28 23:35:01 DEBUG mon-relation-changed File "lib/ceph/utils.py", line 1534, in osdize_dev
2018-04-28 23:35:01 DEBUG mon-relation-changed db.set('osd-devices', osd_devices)
2018-04-28 23:35:01 DEBUG mon-relation-changed AttributeError: 'set' object has no attribute 'set'
2018-04-28 23:35:01 ERROR juju.worker.uniter.operation runhook.go:113 hook "mon-relation-changed" failed: exit status 1
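
The failing call in the traceback is db.set('osd-devices', osd_devices); the error message means that on this code path db is a plain Python set rather than an object that actually has a .set() method (such as a charmhelpers unitdata-style key-value store). The snippet below is only an illustrative reduction of that failure mode; the names are hypothetical and are not taken from lib/ceph/utils.py:

# Illustrative reduction of the AttributeError above; FakeKV and
# record_osd_devices are made-up names, not the charm's actual code.
class FakeKV(object):
    """Stand-in for a unitdata-style key-value store (it has a .set() method)."""
    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

def record_osd_devices(db, osd_devices):
    # Works when db is a kv-store-like object ...
    db.set('osd-devices', osd_devices)

record_osd_devices(FakeKV(), ['/dev/sdc'])  # fine

try:
    # ... but fails when db is a builtin set, as in the traceback.
    record_osd_devices(set(), ['/dev/sdc'])
except AttributeError as exc:
    print(exc)  # 'set' object has no attribute 'set'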

This is Charm Version 261
Juju Version 2.3.7-xenial-amd64

For now I am disabling the Juju agents on these machines, because bluestore saved me during a data recovery and I *refuse* to use Ceph without it from now on.

Revision history for this message
Michael Quiniola (qthepirate) wrote :

Disregard:

I had openstack-origin set to distro instead of cloud:xenial-queens

Changed in ceph-osd (Juju Charms Collection):
status: New → Invalid
Revision history for this message
Florian Guitton (f-guitton) wrote :

Hello everyone!

In fact, I am having the exact same issue, but as far as I can see my "openstack-origin" is definitely set to "cloud:xenial-queens".

Would anybody have any pointers about this?

Best,

Changed in ceph-osd (Juju Charms Collection):
status: Invalid → New
Revision history for this message
Chris MacNaughton (chris.macnaughton) wrote :

Florian: for the ceph* charms, "source" is the configuration key to be concerned with, rather than "openstack-origin".
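
For reference, the value can be checked and, if needed, changed on a deployed application with the Juju CLI (the release string below is just this report's target; adjust for your deployment):

$ juju config ceph-osd source
$ juju config ceph-osd source=cloud:xenial-queens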

Revision history for this message
Florian Guitton (f-guitton) wrote :

Hello Chris, yes, absolutely; I was referencing the previous post, but I most definitely mean "source".
My config is as follows:

maas-region:~$ juju config ceph-osd
application: ceph-osd
charm: ceph-osd
settings:
  aa-profile-mode:
    value: enforce
  autotune:
    value: false
  availability_zone:
    source: unset
  bluestore:
    value: true
  bluestore-block-db-size:
    value: 0
  bluestore-block-wal-size:
    value: 0
  bluestore-db:
    source: unset
  bluestore-wal:
    source: unset
  ceph-cluster-network:
    source: unset
  ceph-public-network:
    source: unset
  config-flags:
    source: unset
  crush-initial-weight:
    source: unset
  customize-failure-domain:
    value: true
  ephemeral-unmount:
    source: unset
  harden:
    source: unset
  ignore-device-errors:
    value: false
  key:
    source: unset
  loglevel:
    value: 1
  max-sectors-kb:
    value: 1.048576e+06
  nagios_context:
    value: dsi-r1
  nagios_servicegroups:
    value: ""
  osd-devices:
    value: /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi
      /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr
      /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx /dev/sdy /dev/sdz
  osd-encrypt:
    value: true
  osd-format:
    value: xfs
  osd-journal:
    value: ""
  osd-journal-size:
    value: 1024
  osd-max-backfills:
    source: unset
  osd-recovery-max-active:
    source: unset
  osd-reformat:
    source: unset
  prefer-ipv6:
    value: false
  source:
    value: cloud:xenial-queens
  sysctl:
    value: '{ kernel.pid_max : 2097152, vm.max_map_count : 524288, kernel.threads-max:
      2097152, vm.vfs_cache_pressure: 1, vm.swappiness: 1 }'
  use-direct-io:
    value: true
  use-syslog:
    value: false

Revision history for this message
Florian Guitton (f-guitton) wrote :

I should clarify that this is using the latest published charm, revision 261.
