ceph can't use unmounted ephemeral storage (vdb)
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
ceph (Juju Charms Collection) | Invalid | Undecided | Unassigned |
Bug Description
Deploying ceph on a vivid-daily image in an OpenStack environment:
[ 0.000000] Linux version 3.18.0-13-generic (buildd@komainu) (gcc version 4.9.2 (Ubuntu 4.9.2-10ubuntu5) ) #14-Ubuntu SMP Fri Feb 6 09:55:14 UTC 2015 (Ubuntu 3.18.0-
[ 0.000000] Command line: BOOT_IMAGE=
ceph:
  branch: lp:~openstack-charmers/charms/trusty/ceph/next
  num_units: 3
  constraints: mem=1G
  options:
    fsid: 6547bd3e-
The mon-relation-joined hook fails while osdizing device /dev/vdb.
The charm unmounts /mnt and then proceeds to format /dev/vdb; that part completes fine, but running
ceph-disk-prepare fails:
# ceph-disk-prepare --fs-type xfs --zap-disk /dev/vdb
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.
*******
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
*******
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.
Setting name!
partNum is 1
REALLY setting name!
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.
Setting name!
partNum is 0
REALLY setting name!
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.
mkfs.xfs: cannot open /dev/vdb1: Device or resource busy
ceph-disk: Error: Command '['/sbin/mkfs', '-t', 'xfs', '-f', '-i', 'size=2048', '--', '/dev/vdb1']' returned non-zero exit status 1
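Before handing a device to mkfs, a quick sanity check that nothing still claims it can make failures like this easier to diagnose. A minimal sketch (the device path is taken from this report; note that kernel-internal holders such as jbd2 journal threads will not appear in /proc/mounts, which is exactly what makes this bug confusing):

```shell
# Sanity-check a block device before formatting.
# /dev/vdb is an assumption from the report; kernel-internal holders
# (e.g. jbd2 journal threads) do NOT show up in /proc/mounts.
dev=/dev/vdb
if grep -q "^${dev} " /proc/mounts; then
    msg="${dev} is mounted"
elif [ -e "${dev}" ] && fuser "${dev}" >/dev/null 2>&1; then
    msg="${dev} is held open by a process"
else
    msg="${dev} appears free"
fi
echo "${msg}"
```

A check like this would report the device as free here, since the only remaining user is a kernel thread.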
Running mkfs manually on /dev/vdb also complains that there is a mounted filesystem.
root@juju-
major minor #blocks name
253 0 10485760 vda
253 1 10484736 vda1
253 16 10485760 vdb
root@juju-
mkfs.xfs: /dev/vdb contains a mounted filesystem
Nothing is seen in /proc/mounts, but lsof does show a journal kthread still open:
root@juju-
jbd2/vdb- 1159 root cwd DIR 253,1 4096 2 /
jbd2/vdb- 1159 root rtd DIR 253,1 4096 2 /
jbd2/vdb- 1159 root txt unknown /proc/1159/exe
root@juju-
root 1159 0.0 0.0 0 0 ? S 15:18 0:00 [jbd2/vdb-8]
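The lingering journal thread can be spotted from the process table. A small sketch (the device name 'vdb' is an assumption from the output above) of the kind of check the charm could perform before formatting:

```shell
# Look for a jbd2 kernel thread still servicing a journal on the device.
# The device name 'vdb' is an assumption taken from the report above.
dev=vdb
jbd2_count=$(ps -eo comm= | grep -c "jbd2/${dev}") || true
if [ "${jbd2_count:-0}" -gt 0 ]; then
    msg="journal thread still active for ${dev}"
else
    msg="no journal thread for ${dev}"
fi
echo "${msg}"
```

If the thread is still present, the kernel considers the device busy and mkfs.xfs will fail exactly as shown above, even though /proc/mounts is clean.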
Changed in ceph (Juju Charms Collection):
status: New → Invalid
If you're deploying Ceph *in* OpenStack, and /dev/vdb is your ephemeral storage, then you can easily work around this by booting your Nova VM with the following user-data:
#cloud-config
mounts:
- ['vdb', null]
That way, cloud-init won't mount, or even touch, your /dev/vdb on initial boot.
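For example, save the user-data to a file and pass it when booting the instance (the nova invocation below is only a sketch; the flavor, image, and instance names are placeholders for your cloud):

```shell
# Write the cloud-init user-data that tells cloud-init not to mount vdb
cat > user-data.yaml <<'EOF'
#cloud-config
mounts:
 - ['vdb', null]
EOF

# Then boot the instance with it, e.g. (names are placeholders):
# nova boot --flavor m1.small --image vivid-daily --user-data user-data.yaml ceph-node
```

With this in place, /dev/vdb is never mounted on first boot, so no ext4 journal thread is created and ceph-disk-prepare can format the device cleanly.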