I tested the latest bluestore code, which is the patch of Mr Tonezhang:
https://review.openstack.org/#/c/566810/
https://review.openstack.org/#/c/566801/9

1.
In my tests, I encountered a problem that the osd bootstrap failed when executing this command:
```
ceph-osd -i "${OSD_ID}" --mkfs -k "${OSD_DIR}"/keyring --osd-uuid "${OSD_UUID}"
```
The log is as follows:
```
++ partprobe
++ ln -sf /dev/disk/by-partlabel/KOLLA_CEPH_DATA_BS_B_2 /var/lib/ceph/osd/ceph-2/block
++ '[' -n '' ']'
++ '[' -n '' ']'
++ ceph-osd -i 2 --mkfs -k /var/lib/ceph/osd/ceph-2/keyring --osd-uuid b5703869-87d1-4ab8-be11-ab24db2870cc
```
So I added the "-d" parameter to debug the problem:
```
ceph-osd -d -i "${OSD_ID}" --mkfs -k "${OSD_DIR}"/keyring --osd-uuid "${OSD_UUID}"
```
```
++ ceph-osd -d -i 0 --mkfs -k /var/lib/ceph/osd/ceph-0/keyring --osd-uuid e14d5061-ae41-4c16-bf3c-2e9c5973cb54
2018-06-13 17:29:53.216034 7f808b6a0d80 0 ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable), process (unknown), pid 78
2018-06-13 17:29:53.243358 7f808b6a0d80 0 stack NetworkStack max thread limit is 24, switching to this now. Higher thread values are unnecessary and currently unsupported.
2018-06-13 17:29:53.248479 7f808b6a0d80 1 bluestore(/var/lib/ceph/osd/ceph-0) mkfs path /var/lib/ceph/osd/ceph-0
2018-06-13 17:29:53.248676 7f808b6a0d80 -1 bluestore(/var/lib/ceph/osd/ceph-0/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-0/block: (2) No such file or directory
2018-06-13 17:29:53.248714 7f808b6a0d80 -1 bluestore(/var/lib/ceph/osd/ceph-0/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-0/block: (2) No such file or directory
2018-06-13 17:29:53.249134 7f808b6a0d80 -1 bluestore(/var/lib/ceph/osd/ceph-0) _read_fsid unparsable uuid
2018-06-13 17:29:53.249141 7f808b6a0d80 1 bluestore(/var/lib/ceph/osd/ceph-0) mkfs using provided fsid e14d5061-ae41-4c16-bf3c-2e9c5973cb54
2018-06-13 17:29:53.249361 7f808b6a0d80 1 bdev create path /var/lib/ceph/osd/ceph-0/block type kernel
2018-06-13 17:29:53.249372 7f808b6a0d80 1 bdev(0x563ef1b19600 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
2018-06-13 17:29:53.249400 7f808b6a0d80 -1 bdev(0x563ef1b19600 /var/lib/ceph/osd/ceph-0/block) open open got: (2) No such file or directory
2018-06-13 17:29:53.249654 7f808b6a0d80 -1 bluestore(/var/lib/ceph/osd/ceph-0) mkfs failed, (2) No such file or directory
2018-06-13 17:29:53.249662 7f808b6a0d80 -1 OSD::mkfs: ObjectStore::mkfs failed with error (2) No such file or directory
2018-06-13 17:29:53.249950 7f808b6a0d80 -1 ** ERROR: error creating empty object store in /var/lib/ceph/osd/ceph-0: (2) No such file or directory
```
After my testing, I found that after executing this command:
```
sgdisk "--change-name=2:KOLLA_CEPH_DATA_BS_B_${OSD_ID}" "--typecode=2:${CEPH_OSD_TYPE_CODE}" -- "${OSD_BS_BLOCK_DEV}"
```
It took 3 seconds to generate the by-partlabel folder and the partlabel on my CentOS virtual machine, but the partuuid was generated immediately, without delay.
So I think using the partuuid is better than the partlabel when initializing Ceph.
In the ceph-deploy tool, the command to initialize an osd is 'ceph-disk prepare', which also uses the partuuid.
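Since the by-partlabel link only appeared about 3 seconds after sgdisk ran, the bootstrap can race with udev regardless of which link it uses. Below is a minimal sketch (my own, not from the kolla code) of a defensive polling step before calling `ceph-osd --mkfs`; the function name and the injected `exists`/`sleep` hooks are assumptions for illustration, and in production they would simply be `os.path.exists` and `time.sleep`:

```python
import itertools

def wait_for_device(path, exists, attempts=30, sleep=lambda s: None, interval=0.1):
    """Poll `exists(path)` up to `attempts` times, sleeping between tries.

    `exists` and `sleep` are injected so the logic is testable without a
    real /dev tree; in production use os.path.exists and time.sleep.
    """
    for _ in range(attempts):
        if exists(path):
            return True
        sleep(interval)
    return False

# Simulate udev creating the link only on the fourth check.
calls = itertools.count()
appeared = lambda p: next(calls) >= 3
print(wait_for_device("/dev/disk/by-partuuid/e14d5061", appeared))  # True
```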
2.
In the current find_disks.py logic, both bluestore and filestore return all of the disk information, including the osd partition, journal partition, block partition, wal partition and db partition:
```
"bs_db_device": "",
"bs_db_label": "",
"bs_db_partition_num": "",
"bs_wal_device": "",
"bs_wal_label": "",
"bs_wal_partition_num": "",
"device": "/dev/xvdb",
"external_journal": false,
"fs_label": "",
"fs_uuid": "cd711f44-2fa8-41c8-8f74-b43e96758edd",
"journal": "",
"journal_device": "",
"journal_num": 0,
"partition": "/dev/xvdb",
"partition_label": "KOLLA_CEPH_OSD_BOOTSTRAP_BS",
"partition_num": "1"
```
There is a bit of confusion here. In fact, a filestore osd only has an osd partition and a journal partition, while a bluestore osd has an osd data partition, a block partition, a wal partition and a db partition.
So I think we should distinguish between the bluestore and filestore disk information.
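A minimal sketch of the proposed split, assuming hypothetical field names modeled on the output above: build only the keys that apply to each store type, instead of one flat dict full of empty values.

```python
def disk_info(store_type, common, specific):
    """Combine the shared disk fields with the store-specific ones."""
    info = dict(common, store_type=store_type)
    info.update(specific)
    return info

# Shared fields, as in the find_disks.py output above.
common = {"device": "/dev/xvdb", "partition": "/dev/xvdb", "partition_num": "1"}

# Filestore only carries journal fields; bluestore only block/wal/db fields.
filestore = disk_info("filestore", common,
                      {"journal": "", "journal_device": "", "journal_num": 0})
bluestore = disk_info("bluestore", common,
                      {"bs_blk_device": "", "bs_wal_device": "", "bs_db_device": ""})

print("bs_db_device" in filestore)  # False: no bluestore keys in filestore info
```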
3.
The osd partition labels after a successful initialization are as follows:
```
KOLLA_CEPH_BSDATA_1
KOLLA_CEPH_DATA_BS_B_1
KOLLA_CEPH_DATA_BS_D_1
KOLLA_CEPH_DATA_BS_W_1
```
The prefixes are different, so we cannot find the disks with the same logic as the filestore.
So I think a good way is to name them like this:
```
KOLLA_CEPH_DATA_BS_1
KOLLA_CEPH_DATA_BS_1_B
KOLLA_CEPH_DATA_BS_1_D
KOLLA_CEPH_DATA_BS_1_W
```
A regular naming scheme can also reduce some code.
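To illustrate why the regular scheme reduces code, here is a sketch (my own, not the kolla implementation) showing that a single pattern can recover both the osd index and the partition role from such labels:

```python
import re

# With the proposed scheme, one regex handles the data partition and all of
# the B/D/W partitions, so bluestore can reuse a filestore-style lookup.
LABEL_RE = re.compile(r"^KOLLA_CEPH_DATA_BS_(\d+)(?:_([BDW]))?$")

def parse_label(label):
    """Return (osd index, role), where role is 'data', 'B', 'D' or 'W'."""
    m = LABEL_RE.match(label)
    if not m:
        return None
    index, role = m.groups()
    return int(index), role or "data"

print(parse_label("KOLLA_CEPH_DATA_BS_1"))    # (1, 'data')
print(parse_label("KOLLA_CEPH_DATA_BS_1_B"))  # (1, 'B')
```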