root@ceph-node1:/etc/ceph# ceph-deploy disk zap ceph-node1:dm-2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.32): /usr/bin/ceph-deploy disk zap ceph-node1:dm-2
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : zap
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf :
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func :
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : [('ceph-node1', '/dev/dm-2', None)]
[ceph_deploy.osd][DEBUG ] zapping /dev/dm-2 on ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 16.04 xenial
[ceph-node1][DEBUG ] zeroing last few blocks of device
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO ] Running command: /usr/sbin/ceph-disk zap /dev/dm-2
[ceph-node1][WARNIN] Caution: invalid backup GPT header, but valid main header; regenerating
[ceph-node1][WARNIN] backup header from main header.
[ceph-node1][WARNIN]
[ceph-node1][DEBUG ] ****************************************************************************
[ceph-node1][DEBUG ] Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
[ceph-node1][DEBUG ] verification and recovery are STRONGLY recommended.
[ceph-node1][DEBUG ] ****************************************************************************
[ceph-node1][DEBUG ] Warning: The kernel is still using the old partition table.
[ceph-node1][DEBUG ] The new table will be used at the next reboot or after you
[ceph-node1][DEBUG ] run partprobe(8) or kpartx(8)
[ceph-node1][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[ceph-node1][DEBUG ] other utilities.
[ceph-node1][DEBUG ] Creating new GPT entries.
[ceph-node1][DEBUG ] Warning: The kernel is still using the old partition table.
[ceph-node1][DEBUG ] The new table will be used at the next reboot or after you
[ceph-node1][DEBUG ] run partprobe(8) or kpartx(8)
[ceph-node1][DEBUG ] The operation has completed successfully.
[ceph_deploy.osd][DEBUG ] Calling partprobe on zapped device /dev/dm-2
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO ] Running command: /sbin/partprobe /dev/dm-2
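Note: the repeated "kernel is still using the old partition table" warnings are expected here, because /dev/dm-2 is a device-mapper multipath device (its dm UUID, mpath-36005076000828021d000000000000139, shows up later in this log), and ceph-deploy follows the zap with the partprobe call above; kpartx(8), which the message also mentions, is the usual way to refresh partition mappings on multipath maps. To confirm the kernel picked up the new table, checks along the following lines should work. This is a sketch, not part of the captured session, and the map name is assumed to match the dm UUID (it will differ on systems that use friendly names):

multipath -ll 36005076000828021d000000000000139          # show the multipath map and its paths
kpartx -u /dev/mapper/36005076000828021d000000000000139  # re-read partition mappings if needed
dmsetup ls --tree                                         # dm-2 should list its partN holders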
root@ceph-node1:/etc/ceph# fdisk -l /dev/dm-2
Disk /dev/dm-2: 5 TiB, 5497558138880 bytes, 10737418240 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: BB3BD467-5A04-486B-AB4D-14243A9EDF72
root@ceph-node1:/etc/ceph# ls -l /dev/dm-*
brw-rw---- 1 root disk 252, 0 Nov 10 21:30 /dev/dm-0
brw-rw---- 1 root disk 252, 1 Nov 10 21:30 /dev/dm-1
brw-rw---- 1 ceph ceph 252, 2 Nov 11 22:39 /dev/dm-2
brw-rw---- 1 ceph ceph 252, 3 Nov 10 21:37 /dev/dm-3
brw-rw---- 1 ceph ceph 252, 4 Nov 10 21:37 /dev/dm-4
root@ceph-node1:/etc/ceph# ceph-deploy osd prepare ceph-node1:dm-2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.32): /usr/bin/ceph-deploy osd prepare ceph-node1:dm-2
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] disk : [('ceph-node1', '/dev/dm-2', None)]
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : prepare
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf :
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] func :
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph-node1:/dev/dm-2:
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 16.04 xenial
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node1
[ceph-node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host ceph-node1 disk /dev/dm-2 journal None activate False
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/dm-2
[ceph-node1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph-node1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph
[ceph-node1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph
[ceph-node1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid path is /sys/dev/block/252:2/dm/uuid
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid is mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid path is /sys/dev/block/252:2/dm/uuid
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid is mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] set_type: Will colocate journal with data on /dev/dm-2
[ceph-node1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid path is /sys/dev/block/252:2/dm/uuid
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid is mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid path is /sys/dev/block/252:2/dm/uuid
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid is mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid path is /sys/dev/block/252:2/dm/uuid
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid is mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid path is /sys/dev/block/252:2/dm/uuid
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid is mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] list_partitions_mpath: list_partitions_mpath: /sys/dev/block/252:2/holders/dm-3/dm/uuid uuid = part1-mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] list_partitions_mpath: list_partitions_mpath: /sys/dev/block/252:2/holders/dm-4/dm/uuid uuid = part2-mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-3 uuid path is /sys/dev/block/252:3/dm/uuid
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-3 uuid is part1-mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-4 uuid path is /sys/dev/block/252:4/dm/uuid
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-4 uuid is part2-mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[ceph-node1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[ceph-node1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[ceph-node1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid path is /sys/dev/block/252:2/dm/uuid
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid is mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid path is /sys/dev/block/252:2/dm/uuid
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid is mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid path is /sys/dev/block/252:2/dm/uuid
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid is mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid path is /sys/dev/block/252:2/dm/uuid
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid is mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] ptype_tobe_for_name: name = journal
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid path is /sys/dev/block/252:2/dm/uuid
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid is mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] create_partition: Creating journal partition num 2 size 5120 on /dev/dm-2
[ceph-node1][WARNIN] command_check_call: Running command: /sbin/sgdisk --new=2:0:+5120M --change-name=2:ceph journal --partition-guid=2:11d4241d-2d88-4978-b7a1-4742c7b5bdc5 --typecode=2:45b0969e-8ae0-4982-bf9d-5a8d867af560 --mbrtogpt -- /dev/dm-2
[ceph-node1][DEBUG ] Setting name!
[ceph-node1][DEBUG ] partNum is 1
[ceph-node1][DEBUG ] REALLY setting name!
[ceph-node1][DEBUG ] Warning: The kernel is still using the old partition table.
[ceph-node1][DEBUG ] The new table will be used at the next reboot or after you
[ceph-node1][DEBUG ] run partprobe(8) or kpartx(8)
[ceph-node1][DEBUG ] The operation has completed successfully.
[ceph-node1][WARNIN] update_partition: Calling partprobe on created device /dev/dm-2
[ceph-node1][WARNIN] command_check_call: Running command: /sbin/udevadm settle --timeout=600
[ceph-node1][WARNIN] command: Running command: /sbin/partprobe /dev/dm-2
[ceph-node1][WARNIN] command_check_call: Running command: /sbin/udevadm settle --timeout=600
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid path is /sys/dev/block/252:2/dm/uuid
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid is mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] list_partitions_mpath: list_partitions_mpath: /sys/dev/block/252:2/holders/dm-4/dm/uuid uuid = part2-mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid path is /sys/dev/block/252:2/dm/uuid
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid is mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/11d4241d-2d88-4978-b7a1-4742c7b5bdc5
[ceph-node1][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/11d4241d-2d88-4978-b7a1-4742c7b5bdc5
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid path is /sys/dev/block/252:2/dm/uuid
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid is mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid path is /sys/dev/block/252:2/dm/uuid
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid is mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] set_data_partition: Creating osd partition on /dev/dm-2
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid path is /sys/dev/block/252:2/dm/uuid
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid is mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid path is /sys/dev/block/252:2/dm/uuid
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid is mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] ptype_tobe_for_name: name = data
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid path is /sys/dev/block/252:2/dm/uuid
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid is mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/dm-2
[ceph-node1][WARNIN] command_check_call: Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:3ab46569-6772-4818-a2ec-efd45543d4f5 --typecode=1:89c57f98-8ae0-4982-bf9d-5a8d867af560 --mbrtogpt -- /dev/dm-2
[ceph-node1][DEBUG ] Setting name!
[ceph-node1][DEBUG ] partNum is 0
[ceph-node1][DEBUG ] REALLY setting name!
[ceph-node1][DEBUG ] Warning: The kernel is still using the old partition table.
[ceph-node1][DEBUG ] The new table will be used at the next reboot or after you
[ceph-node1][DEBUG ] run partprobe(8) or kpartx(8)
[ceph-node1][DEBUG ] The operation has completed successfully.
[ceph-node1][WARNIN] update_partition: Calling partprobe on created device /dev/dm-2
[ceph-node1][WARNIN] command_check_call: Running command: /sbin/udevadm settle --timeout=600
[ceph-node1][WARNIN] command: Running command: /sbin/partprobe /dev/dm-2
[ceph-node1][WARNIN] command_check_call: Running command: /sbin/udevadm settle --timeout=600
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid path is /sys/dev/block/252:2/dm/uuid
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid is mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] list_partitions_mpath: list_partitions_mpath: /sys/dev/block/252:2/holders/dm-3/dm/uuid uuid = part1-mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] list_partitions_mpath: list_partitions_mpath: /sys/dev/block/252:2/holders/dm-4/dm/uuid uuid = part2-mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid path is /sys/dev/block/252:2/dm/uuid
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid is mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] populate_data_path_device: Creating xfs fs on /dev/dm-3
[ceph-node1][WARNIN] command_check_call: Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/dm-3
[ceph-node1][DEBUG ] meta-data=/dev/dm-3 isize=2048 agcount=5, agsize=268435455 blks
[ceph-node1][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[ceph-node1][DEBUG ] = crc=1 finobt=1, sparse=0
[ceph-node1][DEBUG ] data = bsize=4096 blocks=1340866299, imaxpct=5
[ceph-node1][DEBUG ] = sunit=0 swidth=0 blks
[ceph-node1][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=1
[ceph-node1][DEBUG ] log =internal log bsize=4096 blocks=521728, version=2
[ceph-node1][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[ceph-node1][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[ceph-node1][WARNIN] mount: Mounting /dev/dm-3 on /var/lib/ceph/tmp/mnt.lGOywe with options noatime,inode64
[ceph-node1][WARNIN] command_check_call: Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/dm-3 /var/lib/ceph/tmp/mnt.lGOywe
[ceph-node1][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.lGOywe
[ceph-node1][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.lGOywe/ceph_fsid.25574.tmp
[ceph-node1][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.lGOywe/fsid.25574.tmp
[ceph-node1][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.lGOywe/magic.25574.tmp
[ceph-node1][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.lGOywe/journal_uuid.25574.tmp
[ceph-node1][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.lGOywe/journal -> /dev/disk/by-partuuid/11d4241d-2d88-4978-b7a1-4742c7b5bdc5
[ceph-node1][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.lGOywe
[ceph-node1][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.lGOywe
[ceph-node1][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.lGOywe
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid path is /sys/dev/block/252:2/dm/uuid
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid is mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid path is /sys/dev/block/252:2/dm/uuid
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid is mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] command_check_call: Running command: /sbin/sgdisk --typecode=1:4fbd7e29-8ae0-4982-bf9d-5a8d867af560 -- /dev/dm-2
[ceph-node1][DEBUG ] Warning: The kernel is still using the old partition table.
[ceph-node1][DEBUG ] The new table will be used at the next reboot or after you
[ceph-node1][DEBUG ] run partprobe(8) or kpartx(8)
[ceph-node1][DEBUG ] The operation has completed successfully.
[ceph-node1][WARNIN] update_partition: Calling partprobe on prepared device /dev/dm-2
[ceph-node1][WARNIN] command_check_call: Running command: /sbin/udevadm settle --timeout=600
[ceph-node1][WARNIN] command: Running command: /sbin/partprobe /dev/dm-2
[ceph-node1][WARNIN] command_check_call: Running command: /sbin/udevadm settle --timeout=600
[ceph-node1][WARNIN] command_check_call: Running command: /sbin/udevadm trigger --action=add --sysname-match dm-3
[ceph-node1][INFO ] checking OSD status...
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO ] Running command: /usr/bin/ceph --cluster=ceph osd stat --format=json
[ceph-node1][WARNIN] there is 1 OSD down
[ceph_deploy.osd][DEBUG ] Host ceph-node1 is now ready for osd use.
root@ceph-node1:/etc/ceph# ls -l /dev/dm-*
brw-rw---- 1 root disk 252, 0 Nov 10 21:30 /dev/dm-0
brw-rw---- 1 root disk 252, 1 Nov 10 21:30 /dev/dm-1
brw-rw---- 1 ceph ceph 252, 2 Nov 11 22:41 /dev/dm-2
brw-rw---- 1 ceph ceph 252, 3 Nov 11 22:41 /dev/dm-3
brw-rw---- 1 ceph ceph 252, 4 Nov 11 22:41 /dev/dm-4
root@ceph-node1:/etc/ceph# ls -l /var/lib/ceph/
total 32
drwxr-xr-x 2 ceph ceph 4096 Nov 10 20:54 bootstrap-mds
drwxr-xr-x 2 ceph ceph 4096 Nov 10 20:54 bootstrap-osd
drwxr-xr-x 2 ceph ceph 4096 Nov 10 20:54 bootstrap-rgw
drwxr-xr-x 2 ceph ceph 4096 Jul 21 20:05 mds
drwxr-xr-x 3 ceph ceph 4096 Nov 10 20:54 mon
drwxr-xr-x 2 ceph ceph 4096 Nov 10 21:36 osd
drwxr-xr-x 2 ceph ceph 4096 Jul 21 20:05 radosgw
drwxr-xr-x 2 ceph ceph 4096 Nov 11 22:41 tmp
root@ceph-node1:/etc/ceph# ceph-deploy osd activate ceph-node1:dm-2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.32): /usr/bin/ceph-deploy osd activate ceph-node1:dm-2
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : activate
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf :
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func :
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : [('ceph-node1', '/dev/dm-2', None)]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph-node1:/dev/dm-2:
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 16.04 xenial
[ceph_deploy.osd][DEBUG ] activating host ceph-node1 disk /dev/dm-2
[ceph_deploy.osd][DEBUG ] will use init type: systemd
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO ] Running command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/dm-2
[ceph-node1][WARNIN] main_activate: path = /dev/dm-2
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid path is /sys/dev/block/252:2/dm/uuid
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid is mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid path is /sys/dev/block/252:2/dm/uuid
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-2 uuid is mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/dm-2
[ceph-node1][WARNIN] Traceback (most recent call last):
[ceph-node1][WARNIN]   File "/usr/sbin/ceph-disk", line 9, in <module>
[ceph-node1][WARNIN]     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
[ceph-node1][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4995, in run
[ceph-node1][WARNIN]     main(sys.argv[1:])
[ceph-node1][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4946, in main
[ceph-node1][WARNIN]     args.func(args)
[ceph-node1][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3299, in main_activate
[ceph-node1][WARNIN]     reactivate=args.reactivate,
[ceph-node1][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3009, in mount_activate
[ceph-node1][WARNIN]     e,
[ceph-node1][WARNIN] ceph_disk.main.FilesystemTypeError: Cannot discover filesystem type: device /dev/dm-2: Line is truncated:
[ceph-node1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/dm-2
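Activating the whole multipath device fails at the filesystem probe: ceph-disk runs blkid -p -s TYPE -o value -- /dev/dm-2, gets nothing back because the XFS filesystem was created on the data partition rather than on the base device, and raises FilesystemTypeError. The data partition is the part1 holder of the map, /dev/dm-3 (part1-mpath-36005076000828021d000000000000139 in the get_dm_uuid output above), so the next attempt points ceph-deploy at dm-3 instead. The same probes can be reused to confirm this; the following is a sketch, not part of the captured session:

blkid -p -s TYPE -o value -- /dev/dm-2   # prints nothing on the bare mpath device
blkid -p -s TYPE -o value -- /dev/dm-3   # expected to print xfs for the data partition
ls /sys/dev/block/252:2/holders/          # dm-3 (data) and dm-4 (journal)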
root@ceph-node1:/etc/ceph# ceph-deploy osd activate ceph-node1:dm-3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.32): /usr/bin/ceph-deploy osd activate ceph-node1:dm-3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : activate
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf :
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func :
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : [('ceph-node1', '/dev/dm-3', None)]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph-node1:/dev/dm-3:
[ceph-node1][DEBUG ] connected to host: ceph-node1
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 16.04 xenial
[ceph_deploy.osd][DEBUG ] activating host ceph-node1 disk /dev/dm-3
[ceph_deploy.osd][DEBUG ] will use init type: systemd
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO ] Running command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/dm-3
[ceph-node1][WARNIN] main_activate: path = /dev/dm-3
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-3 uuid path is /sys/dev/block/252:3/dm/uuid
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-3 uuid is part1-mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-3 uuid path is /sys/dev/block/252:3/dm/uuid
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-3 uuid is part1-mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] command: Running command: /sbin/blkid -o udev -p /dev/dm-3
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-3 uuid path is /sys/dev/block/252:3/dm/uuid
[ceph-node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/dm-3 uuid is part1-mpath-36005076000828021d000000000000139
[ceph-node1][WARNIN]
[ceph-node1][WARNIN] command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/dm-3
[ceph-node1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[ceph-node1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[ceph-node1][WARNIN] mount: Mounting /dev/dm-3 on /var/lib/ceph/tmp/mnt.rOYAEG with options noatime,inode64
[ceph-node1][WARNIN] command_check_call: Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/dm-3 /var/lib/ceph/tmp/mnt.rOYAEG
[ceph-node1][WARNIN] activate: Cluster uuid is 4e7c38a5-65fd-4211-adb4-9c340fdc0da2
[ceph-node1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph-node1][WARNIN] activate: Cluster name is ceph
[ceph-node1][WARNIN] activate: OSD uuid is 3ab46569-6772-4818-a2ec-efd45543d4f5
[ceph-node1][WARNIN] activate: OSD id is 0
[ceph-node1][WARNIN] activate: Initializing OSD...
[ceph-node1][WARNIN] command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/tmp/mnt.rOYAEG/activate.monmap
[ceph-node1][WARNIN] got monmap epoch 1
[ceph-node1][WARNIN] command: Running command: /usr/bin/timeout 300 ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /var/lib/ceph/tmp/mnt.rOYAEG/activate.monmap --osd-data /var/lib/ceph/tmp/mnt.rOYAEG --osd-journal /var/lib/ceph/tmp/mnt.rOYAEG/journal --osd-uuid 3ab46569-6772-4818-a2ec-efd45543d4f5 --keyring /var/lib/ceph/tmp/mnt.rOYAEG/keyring --setuser ceph --setgroup ceph
[ceph-node1][WARNIN] mount_activate: Failed to activate
[ceph-node1][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.rOYAEG
[ceph-node1][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.rOYAEG
[ceph-node1][WARNIN] Traceback (most recent call last):
[ceph-node1][WARNIN]   File "/usr/sbin/ceph-disk", line 9, in <module>
[ceph-node1][WARNIN]     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
[ceph-node1][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4995, in run
[ceph-node1][WARNIN]     main(sys.argv[1:])
[ceph-node1][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4946, in main
[ceph-node1][WARNIN]     args.func(args)
[ceph-node1][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3299, in main_activate
[ceph-node1][WARNIN]     reactivate=args.reactivate,
[ceph-node1][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3056, in mount_activate
[ceph-node1][WARNIN]     (osd_id, cluster) = activate(path, activate_key_template, init)
[ceph-node1][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3232, in activate
[ceph-node1][WARNIN]     keyring=keyring,
[ceph-node1][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2725, in mkfs
[ceph-node1][WARNIN]     '--setgroup', get_ceph_group(),
[ceph-node1][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2672, in ceph_osd_mkfs
[ceph-node1][WARNIN]     raise Error('%s failed : %s' % (str(arguments), error))
[ceph-node1][WARNIN] ceph_disk.main.Error: Error: ['ceph-osd', '--cluster', 'ceph', '--mkfs', '--mkkey', '-i', '0', '--monmap', '/var/lib/ceph/tmp/mnt.rOYAEG/activate.monmap', '--osd-data', '/var/lib/ceph/tmp/mnt.rOYAEG', '--osd-journal', '/var/lib/ceph/tmp/mnt.rOYAEG/journal', '--osd-uuid', '3ab46569-6772-4818-a2ec-efd45543d4f5', '--keyring', '/var/lib/ceph/tmp/mnt.rOYAEG/keyring', '--setuser', 'ceph', '--setgroup', 'ceph'] failed : 2016-11-11 22:44:18.189869 7f87a13408c0 -1 filestore(/var/lib/ceph/tmp/mnt.rOYAEG) mkjournal error creating journal on /var/lib/ceph/tmp/mnt.rOYAEG/journal: (13) Permission denied
[ceph-node1][WARNIN] 2016-11-11 22:44:18.189892 7f87a13408c0 -1 OSD::mkfs: ObjectStore::mkfs failed with error -13
[ceph-node1][WARNIN] 2016-11-11 22:44:18.189942 7f87a13408c0 -1 ** ERROR: error creating empty object store in /var/lib/ceph/tmp/mnt.rOYAEG: (13) Permission denied
[ceph-node1][WARNIN]
[ceph-node1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/dm-3
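The dm-3 activation gets further: the data partition mounts, but ceph-osd --mkfs, running as the ceph user because of --setuser ceph --setgroup ceph, cannot create the journal and fails with (13) Permission denied. The journal inside the OSD data directory is the symlink the prepare step created to /dev/disk/by-partuuid/11d4241d-2d88-4978-b7a1-4742c7b5bdc5, so the error means whatever that symlink resolves to is not writable by ceph:ceph at activation time. One plausible cause on multipath setups, not confirmed by this log, is that the by-partuuid link resolves to a device node still owned root:disk rather than to the ceph-owned /dev/dm-4. A hedged way to check and work around it (the chown is only a temporary fix; a udev rule would be needed to make it persistent):

ls -l  /dev/disk/by-partuuid/11d4241d-2d88-4978-b7a1-4742c7b5bdc5   # where does the journal symlink point?
ls -lL /dev/disk/by-partuuid/11d4241d-2d88-4978-b7a1-4742c7b5bdc5   # ownership of the resolved device node
chown ceph:ceph "$(readlink -f /dev/disk/by-partuuid/11d4241d-2d88-4978-b7a1-4742c7b5bdc5)"
ceph-deploy osd activate ceph-node1:dm-3                            # retry the activation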